The goal of this project is to use machine learning to understand and mitigate bias in interviewer evaluations. The researchers will examine gender differences in behavior expressed during interviews, focusing on behaviors that can lead to different interviewer evaluations. More specifically, they will use unsupervised video interviews to assess gender differences in signaling behavior, such as facial expressions and language style, and will study how human interviewers perceive these differences when rating personality and cognitive ability. The research design relies on a large sample of men and women interviewees matched on standardized test scores of General Mental Ability (GMA), self-reported personality ratings, age, race, and ethnicity. The project will provide new opportunities for interdisciplinary training of students, with an emphasis on recruiting members of underrepresented groups to work on this project. This research will provide information and guidance for developing bias-free machine-learning systems for personnel selection. By identifying and accounting for behavioral differences between genders that lead to predictive bias in machine-learning selection systems, the proposed research will advance our understanding of gender differences in expressed behavior, methods for dealing with bias in machine learning, and bias-reduction strategies in personnel selection and assessment.

This project focuses on two scenarios for assessing interviewee attributes to train machine-learning algorithms: algorithms trained on interviewee-provided information (GMA test scores and self-reported personality), and algorithms trained on observer (interviewer) assessments of attributes. The matched sample ensures that differences between the machine-learning models are not driven by differences in underlying sample attributes. The project has two main goals: to understand gender differences in expressed behaviors and in interviewer ratings (by trained and untrained interviewers) using machine-learning techniques, and then to use that understanding to reduce predictive discrepancies between men and women by accounting for those differences in the models. The findings will have several significant societal impacts: they will improve our ability to predict and mitigate biases, introduce new methodologies for mitigating bias in machine learning, and provide strategies and tools for reducing social inequalities in employment outcomes. This research also has the potential to advance both social science and machine learning. It will provide insights that can advance our understanding of social role theory by uncovering objective differences in the behaviors exhibited by men and women and in how those behaviors are interpreted. Further, it can advance machine learning by developing new techniques for addressing bias at all stages of the machine-learning pipeline, from instance selection and weighting, to model fitting, to model selection and optimization.
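The instance-weighting stage mentioned above can be illustrated with a standard pre-processing technique. The sketch below uses Kamiran-and-Calders-style reweighing, chosen here purely for illustration; the abstract does not specify which weighting method the project will use. Each training instance is weighted so that group membership (e.g., gender) becomes statistically independent of the outcome label in the weighted training set, before any model is fit.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Per-instance weights w(g, y) = P(g) * P(y) / P(g, y), estimated
    from counts, so that group and label are independent under the
    weighted empirical distribution (Kamiran & Calders reweighing)."""
    n = len(labels)
    group_counts = Counter(groups)              # marginal counts per group
    label_counts = Counter(labels)              # marginal counts per label
    joint_counts = Counter(zip(groups, labels))  # joint counts per (group, label)
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy example: favorable outcomes (1) are skewed across two groups.
groups = ["m", "m", "m", "w", "w", "w"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing_weights(groups, labels)

def weighted_positive_rate(group):
    num = sum(w for g, y, w in zip(groups, labels, weights) if g == group and y == 1)
    den = sum(w for g, w in zip(groups, weights) if g == group)
    return num / den
```

After reweighing, over-represented (group, label) combinations receive weights below 1 and under-represented ones above 1, so both groups show the same weighted positive-outcome rate; the weights can then be passed to any learner that accepts per-sample weights.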

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency
National Science Foundation (NSF)
Institute
Division of Information and Intelligent Systems (IIS)
Type
Standard Grant (Standard)
Application #
1921111
Program Officer
Frederick Kronz
Budget Start
2019-09-15
Budget End
2021-08-31
Fiscal Year
2019
Total Cost
$152,522
Name
Purdue University
City
West Lafayette
State
IN
Country
United States
Zip Code
47907