As software permeates our personal lives, the corporate world, and government bureaucracy, more and more critical decisions are being delegated to opaque algorithms. These algorithms have thus become powerful arbiters of a range of significant decisions with far-reaching societal impact. However, an algorithm carrying out a sensitive task could potentially discriminate, advertently or inadvertently, against certain groups. The overarching goal of this project is to enable automated reasoning about the fairness of such decision-making programs. The results from this project are integrated into teaching curricula, thus raising awareness about fairness in automated decision making. The project trains graduate students in a unique and interdisciplinary environment.
This project investigates various notions of fairness, inspired by the law and by recent work in the area, and casts them through the formal lens of program verification. The project develops techniques for certifying that a program is fair with respect to a given population, and for automatically repairing unfair programs. From a technical viewpoint, the project develops novel probabilistic program verification and synthesis technologies that, while focused on the problem of fairness, are general and expand the reach of current technologies. Specifically, the project develops novel probabilistic verification techniques based on volume computation, as well as program repair and debugging techniques for probabilistic programs.
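To make the certification task concrete, the sketch below estimates one law-inspired fairness notion, demographic parity under the "80% rule" from disparate-impact doctrine, for a toy decision program under a given population model. Everything here is illustrative: the decision program, the population model, and the Monte Carlo estimation are hypothetical stand-ins (the project itself targets exact volume-computation-based verification, not sampling).

```python
import random

def decision_program(years_exp, test_score):
    # Hypothetical decision program under analysis:
    # hire if a weighted score clears a cutoff.
    return 0.6 * test_score + 0.4 * years_exp > 5.0

def sample_population():
    # Hypothetical population model: group membership and
    # (slightly) correlated features.
    minority = random.random() < 0.3
    years_exp = random.gauss(5, 2) if minority else random.gauss(6, 2)
    test_score = random.gauss(6, 1.5)
    return minority, years_exp, test_score

def estimate_parity_ratio(n=100_000, seed=0):
    # Monte Carlo estimate of P(hire | minority) / P(hire | majority).
    random.seed(seed)
    hired = {True: 0, False: 0}
    total = {True: 0, False: 0}
    for _ in range(n):
        minority, yx, ts = sample_population()
        total[minority] += 1
        hired[minority] += decision_program(yx, ts)
    return (hired[True] / total[True]) / (hired[False] / total[False])

if __name__ == "__main__":
    ratio = estimate_parity_ratio()
    print(f"selection-rate ratio: {ratio:.3f}")
    # The 80% rule deems the program fair if the ratio is at least 0.8.
    print("fair under the 80% rule" if ratio >= 0.8 else "potentially unfair")
```

A verifier in the spirit of this project would replace the sampling loop with an exact computation of the volume of the input region accepted by the program under each group's distribution, yielding a certificate rather than a statistical estimate.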