This project will implement machine learning algorithms that explore tradeoffs among three factors: accuracy, fairness, and input data coverage. On the computational side, the project has the potential to bring about a paradigm shift in fair machine learning algorithms while improving uncertainty estimates for those algorithms. The project also includes the development of a human-in-the-loop optimization algorithm that enables humans to dynamically tune the three factors and thereby interact with the algorithm. On the sociological side, this work will provide a cross-cultural study of subject preferences when subjects are presented with quantifiable tradeoffs among the three specified factors; it will focus on behavioral differences between individuals in response to their interactive explorations of how the tradeoffs work. The results of this study will complement prior related research that is more qualitative, and they could have substantial impacts on policy making for machine learning algorithms. They could also improve the public's trust in machine learning algorithms and enable more effective human-machine teams, which is important in the current era, in which machine learning algorithms have increasingly become black boxes while being deployed more broadly in many crucial real-life applications.

The goal of this project is to study the trade-off among accuracy, fairness, and data coverage in machine learning algorithms. The research team plans to develop novel hybrid human/machine-learning algorithms with an integrated, optimizable fairness component. The specific objective is to develop algorithms that trade off among three factors: (1) the traditional average error objective, which pertains to utility; (2) a minimax error objective that minimizes the maximal error incurred on any training example, which pertains to fairness; and (3) the coverage of the algorithm on the input distribution, realized through an abstain option that the algorithm can invoke when it is not confident it can give a correct answer. The team will develop these algorithms using saddle-point optimization approaches for zero-sum games. A human-in-the-loop optimization algorithm will be designed so that humans can dynamically tune the three factors and thereby interact with the algorithm. The team will use a hybrid iterative approach to algorithm design and testing that is based on grounded theory, a methodology widely used in the human and social sciences. Humans will provide directions (e.g., more fairness) and specify the groups they want covered, while the corresponding real-valued changes will be computed automatically. To better understand the trade-offs among utility, fairness, and coverage from a sociological perspective, a broad cross-cultural evaluation will be performed with multiple socio-cultural groups in the United States, as well as through an online platform that reaches users in China and Brazil.
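The abstract names the ingredients of the approach (average error, minimax error over training examples, an abstain option, and saddle-point optimization of a zero-sum game) but not a concrete algorithm. The following is a minimal Python sketch of that general pattern under stated assumptions: a logistic-regression learner, an adversary that maintains a distribution over training examples and reweights toward high-loss points via multiplicative weights, and an assumed confidence threshold tau implementing abstention. All function names, parameter values, and the toy data are illustrative, not the project's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_weighted_logreg(X, y, w, lr=0.2, epochs=500):
    """Learner's best response: minimize the w-weighted logistic loss."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ theta)
        theta -= lr * (X.T @ (w * (p - y)))
    return theta

def minimax_train(X, y, rounds=50, eta=0.5):
    """Approximate the saddle point of a zero-sum game: the adversary holds
    a distribution w over training examples and shifts mass toward high-loss
    points (multiplicative weights); the learner best-responds to w. Averaging
    the learner's iterates approximates the equilibrium, which targets the
    worst-case (minimax) per-example error rather than the average error."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    thetas = []
    for _ in range(rounds):
        theta = fit_weighted_logreg(X, y, w)
        p = np.clip(sigmoid(X @ theta), 1e-9, 1 - 1e-9)
        loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
        w = w * np.exp(eta * loss)   # adversary's multiplicative update
        w /= w.sum()
        thetas.append(theta)
    return np.mean(thetas, axis=0)

def evaluate(theta, X, y, tau=0.75):
    """Report the three factors. tau is an assumed confidence threshold:
    the model abstains whenever max(p, 1 - p) < tau."""
    p = np.clip(sigmoid(X @ theta), 1e-9, 1 - 1e-9)
    covered = np.maximum(p, 1 - p) >= tau
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    avg = loss[covered].mean() if covered.any() else 0.0
    worst = loss[covered].max() if covered.any() else 0.0
    return avg, worst, covered.mean()

# Toy data: two overlapping Gaussian blobs plus an intercept column.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
X = np.hstack([X, np.ones((200, 1))])
y = np.concatenate([np.zeros(100), np.ones(100)])

theta = minimax_train(X, y)
for tau in (0.5, 0.75, 0.9):
    avg, worst, cov = evaluate(theta, X, y, tau)
    print(f"tau={tau:.2f}  avg loss={avg:.3f}  worst loss={worst:.3f}  coverage={cov:.1%}")
```

Sweeping tau makes the coverage trade-off concrete: as tau rises, the model abstains on low-confidence points, coverage falls, and the average and worst-case losses on the covered points typically shrink. A knob of exactly this kind is what a human-in-the-loop tuner would expose, with directions such as "more fairness" translated into real-valued adjustments automatically.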

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1927564
Program Officer: Frederick Kronz
Budget Start: 2019-09-15
Budget End: 2021-08-31
Fiscal Year: 2019
Total Cost: $299,995
Name: Oregon State University
City: Corvallis
State: OR
Country: United States
Zip Code: 97331