Machine learning models support decisions that affect millions of patients in the U.S. healthcare system, helping diagnose illnesses, triage patients in emergency rooms, and guide supervision in intensive care units. In such applications, models often include group attributes such as age, weight, and employment status to capture differences between patient subgroups. Standard techniques for building models with group attributes typically improve aggregate performance across the entire patient population, but in doing so they can degrade performance for specific groups. In such cases, the model may assign these groups preventably inaccurate predictions that undermine medical care and health outcomes. This project aims to prevent such harm by developing tools to ensure the fair use of group attributes in predictive models. The goal is to ensure that a model uses group attributes in a way that yields a tailored performance benefit for every group.

Currently deployed machine learning models in medicine may exhibit fair use violations that undermine health outcomes. This project mitigates fair use violations at key stages in the deployment of machine learning in medicine: verification, model development, and communication. First, it develops tools to check whether a model ensures fair use. These tools include theoretical guarantees that characterize when common approaches to model development produce fair use violations, and statistical tests to verify whether a model violates fair use before and during deployment. Second, it develops algorithms for learning models with fair use guarantees. These algorithms will be tailored to salient use cases in medicine, paired with open-source software, and applied to build decision support tools for real-world medical applications. Third, it creates tools to inform key stakeholders (regulators, physicians, and patients) about a model's fair use guarantees. The project draws on machine learning, information theory, optimization, and human-centered design, as well as on expertise in deploying models in clinical settings. The resulting toolkit for ensuring fair use of group attributes in medicine will be embedded in real-world systems through collaborations with medical researchers and industry.
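As a rough illustration of the verification idea, the sketch below compares a group-aware model against a group-blind baseline on synthetic data and flags any group whose performance does not improve when the group attribute is included. The data, feature names, and the simple accuracy comparison are illustrative assumptions for this sketch; they are not the project's actual test, which the abstract describes as relying on statistical tests and theoretical guarantees.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Minimal sketch of a per-group "fair use" check (illustrative only):
# train a model with and without a group attribute, then compare
# per-group test accuracy and flag groups that see no benefit.
rng = np.random.default_rng(0)
n = 5000

# Synthetic cohort: one binary group attribute and two clinical features.
group = rng.integers(0, 2, size=n)                # e.g., an age-based subgroup
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
logits = 0.8 * x1 - 0.5 * x2 + 0.7 * group * x1   # group modifies the effect of x1
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_blind = np.column_stack([x1, x2])               # group-blind feature set
X_aware = np.column_stack([x1, x2, group])        # group-aware feature set

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

blind = LogisticRegression().fit(X_blind[idx_train], y[idx_train])
aware = LogisticRegression().fit(X_aware[idx_train], y[idx_train])

for g in (0, 1):
    mask = group[idx_test] == g
    acc_blind = blind.score(X_blind[idx_test][mask], y[idx_test][mask])
    acc_aware = aware.score(X_aware[idx_test][mask], y[idx_test][mask])
    gain = acc_aware - acc_blind
    status = "OK" if gain > 0 else "potential fair use violation"
    print(f"group={g}: blind={acc_blind:.3f} aware={acc_aware:.3f} gain={gain:+.3f} ({status})")

A deployed check along these lines would also need to account for sampling noise, for example by pairing the per-group comparison with a hypothesis test or bootstrap confidence interval, consistent with the statistical tests the project proposes.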

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Start:
Project End:
Budget Start: 2021-07-01
Budget End: 2024-06-30
Support Year:
Fiscal Year: 2020
Total Cost: $625,000
Indirect Cost:
Name: Harvard University
Department:
Type:
DUNS #:
City: Cambridge
State: MA
Country: United States
Zip Code: 02138