The 2012 State Liability Systems Ranking Study was conducted for the U.S. Chamber Institute for Legal Reform by Harris Interactive. The final results are based on interviews with a nationally representative sample of 1,125 in-house general counsel, senior litigators or attorneys, and other senior executives who are knowledgeable about litigation matters at public and private companies with annual revenues of at least $100 million. Phone interviews averaging 19 minutes in length were conducted with a total of 551 respondents between March 19, 2012 and June 25, 2012. Online interviews using the same questionnaire, averaging 16 minutes in length, were conducted with a total of 574 respondents between March 13, 2012 and June 25, 2012. The previous research was conducted from October to January in the years 2002–2010.
For the telephone sample, a comprehensive list of general counsel at companies with annual revenues of at least $100 million was compiled using idExec, Dun & Bradstreet (Hoovers), AMI, and ALM. An alert letter was sent to the general counsel at each company. This letter provided general information about the study, notified them of the option to take the survey online or by phone, and told them that an interviewer from Harris Interactive would be contacting them to request their participation if they chose not to take the survey online. The letter included an 800 number for respondents to call and schedule a survey appointment, and it also alerted the general counsel to a $100 charitable incentive or check in exchange for qualified participation in the study.
For the online sample, a representative sample of general counsel and other senior attorneys was drawn from Hoovers ConnectMail, the Association of Corporate Counsel, and LinkedIn. Respondents from Hoovers ConnectMail and the ACC received an electronic version of the alert letter, which included a password-protected link to take the survey. LinkedIn respondents received a public link. All were screened to ensure that they worked for companies with at least $100 million in annual revenues.
A vast majority (83%) of respondents were general counsel, corporate counsel, associate or assistant counsel, or some other senior litigator or attorney. The remaining respondents were senior executives knowledgeable about or responsible for litigation at their companies. Respondents had an average of 21 years of relevant legal experience, including their current position, and had been involved in or familiar with litigation at their current companies for an average of 10 years. Most respondents (81%) were familiar with or had litigated in the states they rated within the past three years. The most common industry sector represented was manufacturing, followed by services.
Telephone Interviewing Procedures
The telephone interviews utilized a computer-assisted telephone interviewing (CATI) system, whereby trained interviewers call and immediately input responses into the computer. This system greatly enhances reporting reliability. It reduces clerical error by eliminating the need for keypunching, since interviewers enter respondent answers directly into a computer terminal during the interview itself. This data entry program does not permit interviewers to inadvertently skip questions, since each question must be answered before the computer moves on to the next question. The data entry program also ensures that all skip patterns are correctly followed. The online data editing system refuses to accept punches that are out of range, demands confirmation of responses that exceed expected ranges, and asks for explanations of inconsistencies between certain key responses.
To achieve high participation, in addition to the alert letters, numerous telephone callbacks were made to reach respondents and conduct the interviews at a convenient time. Interviewers also offered to send respondents an e-mail invitation so that respondents could take the survey online on their own time.
Online Interviewing Procedures
All online interviews were hosted on Harris Interactive’s server and were conducted using a self-administered, online questionnaire via proprietary Web-assisted interviewing software. The mail version of the alert letter directed respondents to a URL and provided participants with a unique ID and password that they were required to enter on the landing page of the survey. Those who received an e-mail version of the alert letter accessed the survey by clicking on the password-protected URL included in the e-mail. Due to password protection, it was not possible for a respondent to answer the survey more than once. Respondents for whom we had e-mail addresses received an initial invitation as well as one to two reminder e-mails.
After determining that respondents were qualified to participate in the survey, interviewers identified the state liability systems with which the respondents were familiar. Then the respondents were asked to identify the last time they litigated in or were familiar with the states’ liability systems. From there, respondents were given the opportunity to evaluate the states’ liability systems, prioritized by most recent litigation experience. On average, respondents evaluated four states via telephone and five states online.
Rating and Scoring of States
States were given a grade (A through F) by respondents for each of the key elements of their liability systems, providing a rating of the states by these grades, the percentage of respondents giving each grade, and the mean grade for each element. The mean grade was calculated by converting the letter grade using a 5.0 scale where A = 5.0, B = 4.0, C = 3.0, D = 2.0, and F = 1.0. Therefore, the mean score displayed can also be interpreted as a letter grade. For example, a mean score of 2.8 is roughly a C- grade.
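The grade-to-score conversion above can be sketched as follows. This is an illustrative example only, not the study's actual tabulation code; the function name and sample grades are assumptions.

```python
# Letter grades map to a 5.0 scale (A = 5.0 ... F = 1.0), and the mean
# grade for an element is the average of respondents' letter grades.
GRADE_POINTS = {"A": 5.0, "B": 4.0, "C": 3.0, "D": 2.0, "F": 1.0}

def mean_grade(grades):
    """Average a list of letter grades on the 5.0 scale."""
    points = [GRADE_POINTS[g] for g in grades]
    return sum(points) / len(points)

# A hypothetical mix of grades averaging just below C (3.0), i.e. roughly a C-:
print(round(mean_grade(["B", "C", "C", "D", "C", "D", "B", "D", "C", "D"]), 1))
```

The mapping back to a letter grade is the reverse reading: a mean near 3.0 is a C, a mean of 2.8 sits just below it.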
The Overall Ranking of State Liability Systems table was developed by creating an index using the grades given on each of the key elements plus the overall performance grade. All of the key elements were highly correlated with one another and with overall performance. The differences in the relationship between each element and overall performance were trivial, so it was determined that each element should contribute equally to the index score. To create the index, each grade across the elements plus the overall performance grade were rescaled from 0 to 100 (A = 100, B = 75, C = 50, D = 25, and F = 0). Then, any evaluation that contained 6 or more “not sure” or “decline to answer” responses per state was removed. A total of 7.1% of state evaluations were unusable. From the usable evaluations, the scores on the elements were then averaged together to create the index score from 0 to 100.
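The index construction can be sketched as below. This is a hedged illustration under the stated rules, not the study's code; the function name, the `MISSING` labels, and the example inputs are assumptions.

```python
# Rescale grades to 0-100, drop evaluations with 6+ missing responses,
# and average the remaining element scores into a 0-100 index score.
RESCALE = {"A": 100, "B": 75, "C": 50, "D": 25, "F": 0}
MISSING = {"not sure", "decline to answer"}  # assumed response labels

def index_score(evaluation):
    """Score one respondent's evaluation of one state (key-element grades
    plus the overall performance grade). Returns None if the evaluation
    contains 6 or more missing responses and must be discarded."""
    n_missing = sum(1 for g in evaluation if g in MISSING)
    if n_missing >= 6:
        return None  # unusable (7.1% of state evaluations in the study)
    scores = [RESCALE[g] for g in evaluation if g not in MISSING]
    return sum(scores) / len(scores)

print(index_score(["A", "B", "C"]))  # (100 + 75 + 50) / 3
```

A state's overall index would then presumably be the average of its usable evaluations' index scores, which is how the ranking table orders the states.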
The scores displayed in this report have been rounded to one decimal place. However, when developing the ranking, scores were evaluated to two decimal places. Therefore, states that appear tied based upon the scores in this report were not tied when two decimal places were taken into consideration. The scores for states that appear tied at one decimal place are Iowa (69.49) and South Dakota (69.48), Arkansas (57.23) and Texas (57.15), and South Carolina (56.34) and Pennsylvania (56.29).
For the Ranking on Key Elements tables, a score was calculated per element for each state based on the 0–100 rescaled performance grades. The states were then ranked by their mean scores on that element.
Reliability of Survey Percentages
The results from any sample survey are subject to sampling variation. The sampling variation (or error) that applies to the results for this survey of 1,125 respondents is plus or minus 2.9 percentage points. That is, the chances are 95 in 100 that a survey result does not vary, plus or minus, by more than 2.9 percentage points from the result that would have been obtained if interviews were conducted with all persons in the universe represented by the sample. Note that survey results based on subgroups of smaller sizes can be subject to larger sampling error.
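The quoted ±2.9 points is consistent with the standard 95%-confidence margin of error for a simple random sample, evaluated at the most conservative proportion, p = 0.5. A minimal check, assuming that standard formula (the report does not state which design adjustments, if any, were applied):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a sample proportion,
    in percentage points: 100 * z * sqrt(p * (1 - p) / n)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1125), 1))  # full sample of 1,125 respondents
```

The same formula shows why subgroup results carry larger error: halving n inflates the margin by a factor of about 1.4.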
Sampling error of the type so far discussed is only one type of error. Survey research is also susceptible to other types of error, such as refusals to be interviewed (nonresponse error), question wording and question order, interviewer error, and weighting by demographic control data. Although it is difficult or impossible to quantify these types of error, the procedures followed by Harris Interactive keep errors of these types to a minimum.