Explaining machine learning (ML) models has received increasing interest because of their adoption in societally critical tasks ranging from health care to hiring to criminal justice. It is crucial for the relevant parties, such as decision makers and decision subjects, to understand why a model makes a particular prediction. This proposal argues that explanation is a communication process: to be effective, explanations should be adaptive and interactive, accounting for both the subject being explained (subgroups of interest) and the target audience (user profiles), whose knowledge and preferences may evolve. Therefore, this proposal aims to develop adaptive and interactive explanations of machine learning models, which will allow people to better understand the decisions being made for and about them.

This proposal has three key areas of focus. First, it will develop a novel formal framework for generating adaptive explanations that can be customized to account for subgroups of interest and user profiles. Second, it will make explanation an interactive communication process by dynamically incorporating user input. Finally, it will improve existing automatic evaluation metrics, such as sufficiency and comprehensiveness, and develop novel ones, especially for the understudied setting of global explanations. The team will embed these computational approaches in real-world systems.
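For concreteness, the sufficiency and comprehensiveness metrics mentioned above (in the sense commonly used for rationale-based explanations, e.g., the ERASER benchmark) can be sketched as follows. This is a minimal illustration, assuming a hypothetical `predict_proba(tokens)` classifier interface and a token-level rationale mask; it is not the project's implementation.

```python
# Sketch of sufficiency and comprehensiveness for a token-level rationale.
# `predict_proba` is a hypothetical interface: it maps a token list to a
# dict of label -> probability. The rationale mask marks explanation tokens.

from typing import Callable, Dict, List

Classifier = Callable[[List[str]], Dict[str, float]]

def sufficiency(predict_proba: Classifier, tokens: List[str],
                rationale_mask: List[bool], label: str) -> float:
    """p(label | full input) - p(label | rationale tokens only).

    Lower is better: a sufficient rationale should, on its own,
    nearly reproduce the original prediction.
    """
    full = predict_proba(tokens)[label]
    rationale_only = [t for t, keep in zip(tokens, rationale_mask) if keep]
    return full - predict_proba(rationale_only)[label]

def comprehensiveness(predict_proba: Classifier, tokens: List[str],
                      rationale_mask: List[bool], label: str) -> float:
    """p(label | full input) - p(label | input with rationale removed).

    Higher is better: removing a comprehensive rationale should
    substantially change the model's prediction.
    """
    full = predict_proba(tokens)[label]
    without_rationale = [t for t, keep in zip(tokens, rationale_mask)
                         if not keep]
    return full - predict_proba(without_rationale)[label]
```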

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2021-02-01
Budget End: 2024-01-31
Fiscal Year: 2020
Total Cost: $375,000
Name: University of Chicago
City: Chicago
State: IL
Country: United States
Zip Code: 60637