2019 Symposium Note: Jury Decision Making Processes


Rebecca Walker[*]

A question that weighs heavily on the minds of many lawyers is: what causes juries to make the decisions they do? A lively audience attended the Denver Law Review’s symposium, Driven by Data: Empirical Studies in Civil Litigation and Health Law, for a panel discussion on the process of jury decision making that attempted to unravel that very question.

The discussion was guided by the learned mind of Jim Gilbert, shareholder and founding partner at the Gilbert Law Group and a “Best Lawyers” lawyer of the year.[1] An impressive panel of academics from across the nation joined Mr. Gilbert. The first panelist, Sara Gordon, is a professor and associate dean at the William S. Boyd School of Law whose research focuses on mental health law.[2] The next panelist, Valerie Hans, is a professor at Cornell Law School who studies juries both nationally and internationally.[3] Lastly, Hillel Bavli is a professor at the Dedman School of Law who specializes in applying statistics to law.[4]

Professor Gordon spoke first, starting the panel off by shattering the audience’s preconceived biases. She encouraged the audience to recognize that certain facts the populace takes to be true are, in reality, completely false. Professor Gordon began with a simple question: does sugar make a child hyper? It does not; rather, our belief that sugar makes children hyper is an example of confirmation bias. A perhaps more shocking example of confirmation bias is the fact that the recidivism rate for rape and sexual assault is 1.7%, not the 80% the Supreme Court cited in McKune v. Lile.[5]

Professor Gordon further examined confirmation bias and its effect on juries. She noted that movies and television shows present forensic evidence as infallible: firearm comparisons, hair samples, bite marks, and more all appear on cable TV, but do they have a place in our courtrooms? Years of wrongful convictions weigh against these popular forensic techniques. Professor Gordon cited the PCAST report, prepared by an advisory group of scientists and engineers who studied whether there were additional scientific means for ensuring the validity of forensic evidence.[6]

An examination of latent fingerprints within the PCAST report demonstrates how confirmation bias can make certain applications of forensic evidence dangerous. Latent prints are impressions from fingers and palms that can be compared to known prints to assess whether the prints came from the same source. This type of analysis has been used for more than a century, yet it has rarely been studied; most of the studies that do exist are dubious, and the process is nonetheless assumed to be foolproof. The PCAST report identified only two studies on latent prints, one in 2011 and one in 2014, that should be considered reliable. The report concluded that there are circumstances in which latent prints can be used correctly; however, false positives occur frequently, and the results are not perfect.

Professor Gordon concluded by warning the audience of the dangers of assuming forensic evidence is true, a problem compounded by the preconceived notions jurors bring with them into the courtroom. She hopes that practitioners will become more willing to challenge such evidence, and that courts will be more receptive to those challenges.

Professor Hans spoke next, discussing her research on what leads juries to give the awards they do. The core of her research is what she calls the “gist” model, a dual-process model that recognizes that people encode both the surface details of information and the “gist” of its underlying meaning. Professor Hans proposed that jurors make both categorical and ordinal judgments of the “gist” of an injury when they attempt to assess damage awards: jurors place the injury ordinally on a scale from least severe to most severe, then attempt to match that judgment with a dollar amount.

Professor Hans and her research team have tested different theories in an attempt to help jurors understand the “gist.” They have conducted studies in which they provided meaningful anchor numbers, offered guidance about how to arrive at appropriate damage awards, and examined jurors’ numeracy abilities. Professor Hans’ research team asked the jurors to describe how they arrived at their hypothetical award amounts. The results were illuminating. Many jurors called pain and suffering “priceless,” suggesting that pain and suffering could not be remedied with money. Further, the study showed that when jurors spoke of the plaintiff favorably, they were more likely to give a higher award. Likewise, interference with the plaintiff’s life from sustained injuries led to higher awards. On the other hand, jurors often considered the blameworthiness of the defendant and were hesitant to award high damages against a relatively blameless defendant. Professor Hans called this hesitancy a fusion of the liability phase and the damages phase. She stated that anchoring, the process of tying a monetary value to a set point, was infrequently observed; however, the anchoring process did appear to be meaningful to those who used it.

Professor Hans concluded that emphasizing interference with the plaintiff’s life was associated with higher damages, as were meaningful anchors; a defendant’s lack of blameworthiness, however, led to lower awards and could even lead to under-compensation of the plaintiff.

Lastly, the panel introduced Professor Bavli, who addressed the unpredictability of jury awards for pain and suffering damages. Professor Bavli began his lecture by asking how best to predict Roberto Clemente’s batting average. The best way, he explained, is a shrinkage formula: one that combines Clemente’s own batting average with the averages of other players, pulling the individual estimate toward the group mean. Professor Bavli then examined whether this method could be applied in a damages context.
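The Clemente example is the classic illustration of the James–Stein estimator popularized by statisticians Bradley Efron and Carl Morris, so a minimal Python sketch of that estimator may help fix ideas. The batting averages below are illustrative placeholders, and the formula is an assumption about the general flavor of shrinkage described, not Professor Bavli’s exact method.

# James-Stein-style shrinkage (an assumed illustration, not Bavli's method).
# Each player's early-season average is pulled toward the group mean; the
# noisier the individual averages are relative to their spread, the harder
# the pull.
def james_stein_shrink(averages, at_bats):
    k = len(averages)
    grand_mean = sum(averages) / k
    # Approximate sampling variance of one average over `at_bats` trials
    sigma2 = grand_mean * (1 - grand_mean) / at_bats
    spread = sum((y - grand_mean) ** 2 for y in averages)
    # Weight on each player's own average (0 means: use only the group mean)
    c = max(0.0, 1 - (k - 3) * sigma2 / spread)
    return [grand_mean + c * (y - grand_mean) for y in averages]

# Illustrative early-season averages for a pool of players (placeholder data)
early = [0.400, 0.378, 0.356, 0.333, 0.311, 0.289, 0.267,
         0.244, 0.222, 0.222, 0.222, 0.200, 0.178, 0.156]
print(james_stein_shrink(early, at_bats=45)[0])  # roughly 0.32 for the 0.400 hitter

Under this sketch, a hot start of .400 shrinks to roughly .32, much closer to the pool’s mean, which is the intuition behind borrowing strength from a group of comparable observations to stabilize a single noisy estimate.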

Professor Bavli explained that a lack of guidance from the court contributes to inconsistency in jury awards: one jury may award nothing, while another may award millions. To combat this discrepancy, Professor Bavli studied “Comparable-Case Guidance,” also known as “prior-award information.” Comparable-Case Guidance has three components: it is information regarding awards in comparable prior cases, it is considered by the trier of fact, and it is used as guidance only.

To study this discrepancy, Professor Bavli used a factorial design and a potential-outcomes framework. He recruited approximately 10,000 people to participate in the experiment, and each participant was randomized to one of twenty-two treatment conditions. The participants were separated into punitive damages and pain and suffering groups, then divided further into control and treatment groups. The treatment groups were subdivided still further, varying additional factors such as the introduction of bias.
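As a rough illustration of randomized assignment in a factorial design, a hypothetical sketch follows; the factor names and levels are invented for illustration and do not reproduce the actual twenty-two conditions.

import itertools
import random

# Hypothetical factors; the real experiment crossed its factors into
# twenty-two conditions, which this sketch does not reproduce.
DAMAGES = ("punitive", "pain_and_suffering")
GUIDANCE = ("control", "comparable_case_guidance")
BIAS = ("none", "low_anchor", "high_anchor")
CONDITIONS = list(itertools.product(DAMAGES, GUIDANCE, BIAS))

def randomize(n_participants, seed=42):
    # Independently assign each participant to one treatment condition
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n_participants)]

assignments = randomize(10_000)

Randomizing each participant independently across fully crossed factors is what allows the resulting treatment effects to be interpreted causally under the potential-outcomes framework.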

The experiment was large, involving approximately 500 hypothesis tests. From all of these tests, Professor Bavli discerned that prior-award information increased accuracy in every treatment condition: it reduced the variability of the awards and minimized any bias that had been introduced.

When participants were given the option of explaining their awards, Professor Bavli found that participants with prior-award information were more likely to give such an explanation. He wondered whether this reflected the amount of thoughtfulness that went into the award.

Professor Bavli admitted there are limitations to his study, including that (1) the participants were mock jurors on mock juries; (2) the experiments were vignette-based rather than based on real-world trials; and (3) Mechanical Turk was used to recruit the study’s participants. Professor Bavli and his colleagues addressed and attempted to mitigate these limitations.

Professor Bavli concluded that Comparable-Case Guidance improves the accuracy of the awards given. The improved accuracy is seen through a reduction in variability and a minimization of biases. He believes that Comparable-Case Guidance could control unpredictability in a courtroom and could have implications in other fields.

Mr. Gilbert, the moderator, initiated the question-and-answer portion of the panel. Without mincing words, Mr. Gilbert expressed his opposition to comparable awards, and he had numerous reasons for his disagreement. His foremost concern was the inherent uniqueness of plaintiffs and cases. But he had additional concerns: when was the comparable case tried, where was it tried, was it a good plaintiff, was it a good lawyer, and what were the jury instructions? His concerns culminated in the fact that he would never want one of his own clients to be given an award based on a case he did not try.

Mr. Gilbert then turned his attention to Professor Hans’ discussion of anchoring. He told the audience that he would anchor the jury on his ideal number by comparing it to a simpler scenario: a company defaulting on a loan. He would ask the hypothetical jurors whether they would have an issue awarding the amount of the defaulted loan if the facts and law supported that anchored number. Then he would ask whether they would have an issue awarding the same measure in pain and suffering if the facts and the law supported his award amount.

The audience shared Mr. Gilbert’s concerns. Comparable-Case Guidance seemed a herculean task to some, difficult to implement and impractical in practice. But while Comparable-Case Guidance struck some listeners as extreme, the other information presented seemed invaluable. The panel left the audience firm in the conclusion that the process of jury award decision-making is delicate and complex; with the tools the panelists provided, however, assisting juries seemed a less daunting task.



[*] Staff Editor for the Denver Law Review and 2020 J.D. Candidate at the University of Denver Sturm College of Law.

[1] About Us Page, The Gilbert Law Group, https://www.thegilbertlawgroup.com/about-us/ (last visited Feb. 15, 2019).

[2] Faculty Page of Sara Gordon, UNLV, https://law.unlv.edu/faculty/sara-gordon (last visited Feb. 15, 2019).

[3] Faculty Page of Valerie Hans, Cornell Law School, https://www.lawschool.cornell.edu/faculty/bio_valerie_hans.cfm (last visited Feb. 15, 2019).

[4] Faculty Page of Hillel J. Bavli, SMU, https://www.smu.edu/Law/Faculty/Profiles/Bavli-Hillel-J (last visited Feb. 15, 2019).

[5] McKune v. Lile, 536 U.S. 24, 33 (2002).

[6] PCAST Releases Report on Forensic Science in Criminal Courts, The White House, https://obamawhitehouse.archives.gov/blog/2016/09/20/pcast-releases-report-forensic-science-criminal-courts (last visited Feb. 16, 2019) (citing the President’s Council of Advisors on Science and Technology report, which recommended “actions to strengthen forensic science and promote its more rigorous use in the courtroom”).