This project will create a general method for value-sensitive algorithm design and develop tools and techniques to help incorporate stakeholders' tacit values, balance multiple stakeholders' values, and achieve collective goals during algorithm development. The research community has paid increasing attention to the role of human values in algorithm design and development. For example, fairness-aware machine learning research attempts to translate fairness notions into formal algorithmic constraints and to develop algorithms subject to those constraints. Despite the mathematical rigor of these approaches, prior research suggests a disconnect between current discrimination-aware machine learning research and stakeholders' realities, contexts, and constraints; this disconnect is likely to undermine practical initiatives. Furthermore, studies suggest that there are often tensions among the diverse values relevant to an algorithm's design. The new general method will be developed in the context of redesigning Wikipedia's Objective Revision Evaluation Service (ORES), a machine-learning-based service that generates real-time predictions of edit quality and article quality. The redesign will benefit the vast number of people who consume Wikipedia content, either directly or indirectly through other applications.

This research has four major goals. The first is to articulate and demonstrate a general method for creating algorithmic systems that respect and balance stakeholders' values. The second is to create techniques for generating an algorithmic system's value report and explaining its value trade-offs. The third is to create, deploy, and evaluate social and technical innovations that address fundamental trade-offs between different values. The final goal is to design and implement improvements to ORES, which will in turn improve the wide variety of applications that rely on ORES, as well as Wikipedia's content and community as a whole. As an example of the kinds of problems that must be solved: quality-control algorithms that prioritize efficiency in deleting low-quality content risk undermining the motivation of contributors in peer production communities, particularly new contributors who are still learning how to contribute. To date, however, little work has been done to address such tensions and trade-offs between values in algorithm design. The research will be performed through multiple studies that step through the proposed method for a diverse set of tasks, each involving interaction with multiple stakeholders who hold different (and perhaps conflicting) values.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

National Science Foundation (NSF)
Division of Information and Intelligent Systems (IIS)
Standard Grant
Program Officer: William Bainbridge
University of Minnesota Twin Cities
United States