This project advances the potential for Machine Learning (ML) to serve the social good by improving understanding of how to apply ML methods to high-stakes, real-world settings in fair and responsible ways. Government agencies and nonprofits use ML tools to inform consequential decisions. However, a growing number of academics, journalists, and policymakers have expressed apprehension regarding the prominent (and growing) role that ML technology plays in the allocation of social benefits and burdens across diverse policy areas, including child welfare, health, and criminal justice. Many of these decisions have long-lasting effects on the lives of their subjects, and when ML tools are applied inappropriately, they can harm already vulnerable and historically disadvantaged communities. These concerns have given rise to numerous research efforts aimed at understanding disparities and developing tools to minimize or mitigate them. To date, these efforts have had limited impact on real-world applications because they focus too narrowly on abstract technical concepts and computational methods at the expense of the decisions and societal outcomes those methods affect. Such efforts also commonly fail to situate the work in real-world contexts or to draw input from the communities most affected by ML-assisted decision-making. This project seeks to fill these gaps in current research and practice in close partnership with government agencies and nonprofits.

This project draws upon disciplinary perspectives from computer science, statistics, and public policy. Its first aim explores the mapping between policy goals and ML formulations. This aim focuses on what facts must be consulted to make coherent determinations about fairness, and anchors those assessments to near- and long-term societal outcomes for people subject to decisions. This work offers practical ways to engage with partners, policymakers, and affected communities to translate desired fairness goals into computationally tractable measures. Its second aim investigates fairness through the entire ML decision-support pipeline, from policy goals to data to models to interventions. It explores how different approaches to data collection, imputation, model selection, and evaluation affect the fairness of the resulting tools. The project's third aim is concerned with modeling the long-term societal outcomes of ML-assisted decision-making in policy domains, ultimately to guide a grounded approach to designing fairness-promoting methods. The project's overarching objective is to bridge the divide between active research in fair ML and applications in policy domains. It does so through innovative teaching and training activities, broadening the participation of under-represented groups in research and technology design, enhancing scientific and technological understanding among the public, practitioners, and legislators, and delivering a direct positive impact with partner agencies.
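To make the phrase "computationally tractable measures" of fairness concrete, the sketch below computes an equal-opportunity (true-positive-rate) gap between two groups, one common way such a goal is operationalized. This is a minimal illustration only: the metric choice, variable names, and toy data are assumptions for exposition and are not drawn from the project itself.

```python
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """Equal-opportunity gap: absolute difference in true positive rates
    between two groups. One tractable proxy for a fairness goal such as
    "qualified individuals should be selected at similar rates across groups."
    `group` is a hypothetical binary protected-attribute indicator (0/1).
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)   # true positives' denominator: positives in group g
        rates.append(y_pred[mask].mean() if mask.any() else np.nan)
    return abs(rates[0] - rates[1])

# Toy usage with made-up labels, predictions, and group membership
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(f"TPR gap: {tpr_gap(y_true, y_pred, group):.2f}")
```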

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Start:
Project End:
Budget Start: 2021-04-01
Budget End: 2024-03-31
Support Year:
Fiscal Year: 2020
Total Cost: $375,000
Indirect Cost:
Name: Carnegie-Mellon University
Department:
Type:
DUNS #:
City: Pittsburgh
State: PA
Country: United States
Zip Code: 15213