As software permeates our personal lives, the corporate world, and bureaucracy, more and more of our critical decisions are being delegated to opaque algorithms. These algorithms have thus become powerful arbiters of a range of significant decisions with far-reaching societal impact. However, an algorithm carrying out a sensitive task could potentially discriminate, whether advertently or inadvertently, against certain groups. The overarching goal of this project is to enable automated reasoning about the fairness of such decision-making programs. The results from this project are integrated into teaching curricula, thus raising awareness about fairness in automated decision making. The project trains graduate students in a unique and interdisciplinary environment.

This project investigates various notions of fairness, inspired by the law and recent works in the area, and casts them through the formal lens of program verification. The project develops techniques for certifying that a program is fair under a given population, and techniques for automatically repairing unfair programs to make them fair. From a technical viewpoint, this project develops novel probabilistic program verification and synthesis technologies that, while focusing on the problem of fairness, are general and expand the reach of current technologies. Specifically, the project develops novel probabilistic verification techniques based on volume computation. Furthermore, the project develops program repair and debugging techniques for probabilistic programs.
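To make the verification question concrete, the sketch below phrases one common fairness notion (group fairness, i.e. the ratio of favorable outcomes across groups) as a property of a decision program under a population model. The population model, the `hire` decision program, and all thresholds are invented for illustration, and Monte Carlo sampling is used as a simple stand-in for the exact volume-computation techniques the project develops.

```python
import random

def sample_individual():
    """Draw one individual from an assumed population model (all priors invented)."""
    minority = random.random() < 0.3  # assumed group prior
    # assumed correlated features: years of experience and a test score
    experience = max(0.0, random.gauss(10, 3) - (1.5 if minority else 0.0))
    score = max(0.0, random.gauss(60, 10) + 0.8 * experience)
    return minority, experience, score

def hire(experience, score):
    """A toy decision program whose fairness we want to check."""
    return score > 65 or (experience > 12 and score > 55)

def estimate_group_fairness(n=100_000, seed=0):
    """Monte Carlo estimate of P(hire | minority) / P(hire | majority).

    The program is considered fair when this ratio is at least 1 - epsilon
    for some chosen tolerance epsilon (e.g. 0.15).
    """
    random.seed(seed)
    hired = {True: 0, False: 0}
    total = {True: 0, False: 0}
    for _ in range(n):
        minority, exp_years, score = sample_individual()
        total[minority] += 1
        hired[minority] += hire(exp_years, score)
    return (hired[True] / total[True]) / (hired[False] / total[False])

print(f"group-fairness ratio: {estimate_group_fairness():.3f}")
```

Verification, in this framing, amounts to proving a bound on this ratio symbolically rather than estimating it by sampling; repair then searches for a small change to the decision program that restores the bound.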

Agency: National Science Foundation (NSF)
Institute: Division of Computing and Communication Foundations (CCF)
Application #: 1704117
Program Officer: Nina Amla
Budget Start: 2017-06-01
Budget End: 2021-05-31
Fiscal Year: 2017
Total Cost: $1,014,107
Name: University of Wisconsin-Madison
City: Madison
State: WI
Country: United States
Zip Code: 53715