In Daubert v. Merrell Dow Pharmaceuticals, the Supreme Court clarified that trial judges are responsible for serving as evidentiary gatekeepers for scientific evidence. When judges fail at this gatekeeping role and admit unreliable expert testimony, the task of identifying and assessing flawed scientific testimony falls to attorneys during cross-examination and to jurors during trial. The purpose of the proposed studies is to determine whether judges, attorneys, and jurors are sensitive to threats to scientific validity, to examine the effectiveness of scientifically informed cross-examination for educating jurors about sophisticated validity threats, and to assess the ability of attorneys and judges to develop questions that would elicit information about scientific validity.

Study 1 examines the utility of cross-examination for educating jurors about threats to scientific validity. Jurors will view a videotaped trial, render a verdict, and provide evaluations of the expert's testimony. In Study 2, the researchers will examine whether judges and attorneys are capable of fulfilling Daubert requirements by evaluating scientific quality and developing questions that might highlight concepts of scientific validity or reliability. Judges and attorneys will read a case summary and develop questions intended to assess the reliability and validity of an expert's testimony.

This research has the potential to increase our understanding of the Court's assumptions in Daubert. Further, it will address a critical, previously unexamined question about the effectiveness of Daubert's safeguards: whether attorneys and judges have developed the skills necessary to elicit information about scientific reliability and validity during questioning and cross-examination. Finally, knowledge of whether cross-examination can be an effective safeguard and whether attorneys and judges are capable of developing effective cross-examination questions may lead to changes in continuing legal education about scientific evidence.

Project Report

Expert testimony is intended to help jurors understand scientific, technical, or otherwise complex evidence. Expert testimony can be controversial, however, because of concerns about its quality. Although the Supreme Court clarified judges' role as gatekeepers of scientific evidence, judges may lack the skills necessary to evaluate scientific quality (Gatowski et al., 2001; Kovera & McAuliff, 2000) and may admit testimony and opinions derived from invalid science into evidence (Kovera & McAuliff, 2000). When judges fail to recognize flawed science, attorneys may aid the judge in determining scientific quality by filing a motion to exclude the testimony. To assist judges successfully, attorneys must be sufficiently knowledgeable about scientific methodology to identify threats to validity and reliability, yet attorneys may fail to identify methodological problems in scientific studies (Kovera & McAuliff, 2002). Jurors also struggle with scientific concepts and evidence evaluation (Levett, 2008, 2009; McAuliff & Duckworth, 2010; McAuliff & Kovera, 2008; McAuliff et al., 2009). Although cross-examination has demonstrated promise for educating jurors about simple threats to scientific validity (Austin & Kovera, under review), questions remain about whether cross-examination can increase juror sensitivity to variations in scientific methodology when the threat is more complex. The PIs conducted two studies to empirically examine whether cross-examination helps jurors evaluate scientific evidence.

In the first study, the PIs examined whether judges and attorneys are capable of evaluating the quality of scientific evidence and of developing questions that could help jurors evaluate scientific validity or reliability. Ninety-five attorneys and 111 judges read a fact pattern from a civil trial that included expert testimony about the intelligence of the plaintiff. The PIs varied both the validity and reliability of the intelligence test the expert administered to the plaintiff. Psychological tests can vary in both validity and reliability: validity refers to how well an instrument measures what it intends to measure (e.g., I.Q. tests measure cognitive ability, personality tests measure personality characteristics, scales measure weight), and reliability refers to the consistency of measurement. According to the trial testimony, the test was either valid and reliable, invalid due to experimenter bias but reliable, or unreliable, with relatively low reliability indices for full-scale internal consistency, test-retest reliability, and inter-observer agreement.

The results suggest that judges are insensitive to variations in scientific reliability and validity. Judges were as likely to admit bad scientific evidence as good scientific evidence and did not rate valid, reliable scientific testimony higher in quality than invalid or unreliable testimony. Thus, judges may not serve as effective gatekeepers of scientific evidence. Attorneys provided lower ratings of scientific quality when the testimony was unreliable but not when it was invalid. Moreover, attorneys reported that they would be more likely to move to exclude unreliable testimony than invalid or valid testimony.
Although attorneys provided lower ratings of scientific quality for unreliable testimony, they did not develop cross-examination questions that addressed issues of reliability to help educate jurors. Attorneys did, however, formulate questions about scientific validity. These findings suggest that attorneys may serve as an effective buffer against the admission of unreliable expert testimony, but jurors may be exposed to invalid scientific testimony because neither judges nor attorneys identified these scientific flaws. These data also suggest that although jurors may be exposed to invalid scientific testimony, attorneys may be better at asking questions about validity.

In the second study, the PIs evaluated the ability of cross-examination to educate jurors about a sophisticated threat to scientific validity (experimenter bias). Jurors watched a simulated videotaped trial based on the testimony from an actual case, Hoffman v. Board of Education (1978). During the trial, jurors heard testimony from an expert about an intelligence test she administered to the plaintiff; the testimony varied in the same ways as in Study 1 (valid and reliable, invalid but reliable, or unreliable). In addition, the PIs manipulated cross-examination type: half of the sample heard a scientifically informed cross-examination designed to help jurors understand concepts of reliability and validity, and the other half did not hear the scientifically informed questions. Although the manipulation checks suggested that jurors were generally able to recognize the manipulations of validity and reliability, jurors remained insensitive to variations in experimenter bias and reliability even with scientifically informed cross-examination.

If jurors cannot be trained to evaluate scientific quality through cross-examination, attorneys' ability to expose methodological issues relevant to validity and reliability may not matter. In this scenario, the judge's role as the gatekeeper for admissible evidence (i.e., keeping unreliable evidence from being presented at trial) becomes significantly more important. Future research should address how to train judges to reason scientifically and should continue to evaluate methods intended to train jurors to do the same.
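For readers unfamiliar with the reliability indices referenced above, the standard psychometric formulations are sketched below. The report does not specify which estimators were described in the expert's testimony, so the particular choices here (Cronbach's alpha for internal consistency and the Pearson correlation for test-retest reliability) are illustrative assumptions, not the study's own materials. For a k-item scale:

\[
\alpha \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\qquad
r_{\text{test-retest}} \;=\; \frac{\operatorname{Cov}(X_{1}, X_{2})}{\sigma_{X_{1}}\,\sigma_{X_{2}}}
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item i, \(\sigma^{2}_{X}\) is the variance of the total score, and \(X_{1}\) and \(X_{2}\) are scores from two administrations of the same test. Values closer to 1 indicate more consistent measurement. Note that a test can score well on every such index and still be invalid, as in the experimenter-bias condition: reliability concerns only consistency, not whether the instrument measures what it purports to measure.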

Agency
National Science Foundation (NSF)
Institute
Division of Social and Economic Sciences (SES)
Type
Standard Grant (Standard)
Application #
1155251
Program Officer
Susan Sterett
Project Start
Project End
Budget Start
2012-04-01
Budget End
2014-03-31
Support Year
Fiscal Year
2011
Total Cost
$14,934
Indirect Cost
Name
CUNY John Jay College of Criminal Justice
Department
Type
DUNS #
City
New York
State
NY
Country
United States
Zip Code
10019