The increasing impact of AI technologies on real applications has subjected these technologies to unprecedented scrutiny. One of the major concerns is the extent to which they reproduce or exacerbate inequity, with a number of high-profile examples, such as bias in recidivism prediction, illustrating the potential limitations of, and eroding trust in, AI. While approaches have emerged that aim to guarantee some form of fairness in AI systems, most are restricted to relatively simple prediction problems and do not account for the specific use cases of predictions. However, many practical uses of predictive models involve decisions that occur over time and that are obtained by solving complex optimization problems. Moreover, few general approaches exist even for ascertaining equitable outcomes of dynamic decisions, let alone for providing guidance on ensuring equity in such settings. To address these limitations, this project is developing a framework called FairGame for the development and certification of fair autonomous decision-making algorithms. This project will also develop new courses and course modules at Washington University, take a lead role in a new interdisciplinary program in Computational and Data Sciences, seek to inform policymakers and regulators about computational approaches to ensuring fairness, and work to broaden participation in computing through, for example, the Missouri Louis Stokes Alliance for Minority Participation.

This project develops an audit-driven, game-theoretic framework for the development and certification of fair autonomous decision-making algorithms. FairGame features a decision module that computes a decision policy and a pseudo-adversarial auditor that provides feedback to the decision module about possible fairness violations, as well as fairness certification. The FairGame framework conceptually resembles the well-known actor-critic methods in reinforcement learning; however, unlike actor-critic methods, it enforces that the auditor has only query access to the policy and, conversely, that the decision module can only query the auditor (which provides feedback on the decisions). Different notions of fairness and efficacy can be modeled as different types of two-player games between the decision module and the auditor. This project will study foundational issues in this framework, including (a) the extent to which (probabilistically) certifying fairness in a black-box setting is possible, (b) practical algorithms for auditing, (c) iterative approaches for ensuring fair decisions given black-box access to an auditor, including policy gradient methods and Bayesian optimization, (d) appropriate fairness and efficacy criteria, and (e) whether these criteria can satisfy different regulatory models, such as a requirement of "meaningful information about the logic" or legally imposed requirements of nondiscrimination. The work will be informed by the real policy challenge of developing fair algorithms for the provision of services to homeless households, and it will provide feedback in this domain to key stakeholders.
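The query-only interaction between the two modules can be illustrated with a minimal sketch. The code below is a hypothetical toy instance, not the project's actual method: the auditor estimates a demographic-parity gap by querying a threshold policy on labeled samples, and the decision module adjusts per-group thresholds using only the auditor's feedback (the names `audit`, `run_fairgame`, the fairness tolerance `epsilon`, and the threshold-update rule are all illustrative assumptions).

```python
def audit(policy, samples, epsilon):
    """Pseudo-adversarial auditor with query-only access to the policy.

    Estimates the demographic-parity gap (difference between per-group
    positive-decision rates) and certifies fairness when it is <= epsilon.
    """
    by_group = {}
    for score, group in samples:
        # The auditor may only query the policy, never inspect it.
        by_group.setdefault(group, []).append(policy(score, group))
    rates = {g: sum(d) / len(d) for g, d in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return gap <= epsilon, gap, rates


def run_fairgame(samples, thresholds, epsilon=0.05, step=0.05, max_rounds=20):
    """Decision module: iteratively repairs per-group thresholds using only
    the auditor's feedback (certified flag, gap, and per-group rates)."""
    gap = None
    for _ in range(max_rounds):
        policy = lambda score, group: int(score >= thresholds[group])
        certified, gap, rates = audit(policy, samples, epsilon)
        if certified:
            break
        # Illustrative update rule: loosen the threshold for the group
        # currently receiving the lowest positive-decision rate.
        low = min(rates, key=rates.get)
        thresholds[low] = round(thresholds[low] - step, 2)
    return thresholds, gap


# Toy data: (score, group) pairs where group 1 has systematically lower scores.
samples = [(0.2, 0), (0.4, 0), (0.6, 0), (0.8, 0),
           (0.1, 1), (0.2, 1), (0.3, 1), (0.7, 1)]
thresholds, gap = run_fairgame(samples, {0: 0.5, 1: 0.5})
```

The toy update rule converges here because lowering the disadvantaged group's threshold monotonically raises its positive rate; the project's framing replaces this hand-rolled loop with policy gradient or Bayesian-optimization updates driven by the same black-box feedback.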

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Start:
Project End:
Budget Start: 2020-01-01
Budget End: 2022-12-31
Support Year:
Fiscal Year: 2019
Total Cost: $444,145
Indirect Cost:
Name: Washington University
Department:
Type:
DUNS #:
City: Saint Louis
State: MO
Country: United States
Zip Code: 63130