All humans, no matter how intelligent, egalitarian, or well-intentioned, are susceptible to cognitive biases in the way they make decisions and judge others. Because these biases can operate unintentionally, in opposition to one's conscious intentions, personal beliefs, and objective data, they may unwittingly perpetuate social inequalities. Unexplained disparities in R01 funding outcomes by race and gender have raised concern about bias in NIH peer review. This Transformative R01 will examine if and how implicit (i.e., unintentional) bias might occur in R01 peer review through the following three Specific Aims:
Specific Aim #1. Identify the extent to which investigator characteristics influence the words and descriptors chosen by R01 peer-reviewers and how text relates to assigned scores. We will validate positive and negative grant evaluation word categories, analyze the text of a national sample of R01 reviews, and compare the grant review text for different investigator characteristics. We hypothesize that categories of words and descriptors will differ in ways that suggest implicitly different evaluation standards by applicant race and gender, even when application scores and funding outcomes are similar.
Specific Aim #2. Determine whether investigator race, gender, or institution causally influences the review of identical proposals. We will conduct a randomized, controlled study in which we manipulate characteristics of a grant principal investigator (PI) to assess their influence on grant review outcomes. We will request donations of actual funded R01s and, within each grant, manipulate the PI's gender, race, or home institution. We will then invite reviewers in the appropriate discipline to review the proposals, and we will analyze written reviews and scores. We hypothesize that investigator variables will significantly influence scores and review text, such that grants attributed to higher-status groups (male, White, prestigious institution) will obtain better scores and that review text will support implicitly different standards of excellence.
Specific Aim #3. Examine how interactional patterns among study section members promote receptivity and resistance to discussion topics and associated grant applicants. In audio- and videotapes of constructed study sections, we will investigate the real-time social interactional processes in the discussions of R01 proposals. We will employ conversation analysis to examine the delivery of initial rankings and their rationales, topic development, and the processes through which final rankings are negotiated. This research is innovative because it examines for the first time the complexities of potential bias in NIH peer review. The potential impact is threefold: this research will 1) discover whether certain forms of cognitive bias are or are not consequential in R01 peer review, 2) determine whether quantitative text analysis is a useful measure of implicit bias, and 3) describe and label real-time grant reviewer interactional patterns. Taken together, the results of our research could set the stage for transformation in peer review throughout NIH.
NIH research grants are the engine of innovation and discovery, with the goal of improving human health at the individual patient and population level. Peer review is fundamental to the grant award process, ultimately determining what research will be conducted. This multi-method research project will explore whether and how unintentional cognitive bias triggered by characteristics of the applicant could influence R01 grant review outcomes.
Kolehmainen, Christine; Carnes, Molly (2018) Who Resembles a Scientific Leader-Jack or Jill? How Implicit Bias Could Influence Research Grant Funding. Circulation 137:769-770
Pier, Elizabeth L; Brauer, Markus; Filut, Amarette et al. (2018) Low agreement among reviewers evaluating the same NIH grant applications. Proc Natl Acad Sci U S A 115:2952-2957
Forscher, Patrick S; Mitamura, Chelsea; Dix, Emily L et al. (2017) Breaking the prejudice habit: Mechanisms, timecourse, and longevity. J Exp Soc Psychol 72:133-146
Raclaw, Joshua; Ford, Cecilia E (2017) Laughter and the Management of Divergent Positions in Peer Review Interactions. J Pragmat 113:1-15
Sheridan, Jennifer; Savoy, Julia N; Kaatz, Anna et al. (2017) Write More Articles, Get More Grants: The Impact of Department Climate on Faculty Research Productivity. J Womens Health (Larchmt) 26:587-596
Carnes, Molly; Johnson, Paula; Klein, Wendy et al. (2017) Advancing Women's Health and Women's Leadership With Endowed Chairs in Women's Health. Acad Med 92:167-174
Magua, Wairimu; Zhu, Xiaojin; Bhattacharya, Anupama et al. (2017) Are Female Applicants Disadvantaged in National Institutes of Health Peer Review? Combining Algorithmic Text Mining and Qualitative Methods to Detect Evaluative Differences in R01 Reviewers' Critiques. J Womens Health (Larchmt) 26:560-570
Kaatz, Anna; Carnes, Molly; Gutierrez, Belinda et al. (2017) Fair Play: A Study of Scientific Workforce Trainers' Experience Playing an Educational Video Game about Racial Bias. CBE Life Sci Educ 16:
Carnes, Molly; Bairey Merz, C Noel (2017) Women Are Less Likely Than Men to Be Full Professors in Cardiology: Why Does This Happen and How Can We Fix It? Circulation 135:518-520
Mitamura, Chelsea; Erickson, Lynnsey; Devine, Patricia G (2017) Value-Based Standards Guide Sexism Inferences for Self and Others. J Exp Soc Psychol 72:101-117
Showing the most recent 10 out of 21 publications