Many computing systems include intelligent components: algorithms or computational models that process data and make decisions or recommendations based on that data. These models often function as 'black boxes': their internal mechanisms are frequently invisible to their users, and when they are visible, they usually take a form that requires training in computer science to understand and modify. Such models underlie many socially useful functions, ranging from risk scoring to autonomous vehicles, which makes it important for their users, who are often experts in the specific domain of use but not in computing, to be able to use and improve the models. This project's goal is to develop techniques that help human users who are not trained computer scientists interact with these models using concepts and interaction techniques drawn from the specific domain and problem at hand rather than from the underlying algorithms. This will lead to scientific advances in the broader area of explainable artificial intelligence, as well as increasing the transparency and usefulness of intelligent systems that affect everyday life. The project will also support course development and outreach activities and produce freely available toolkits for other researchers and developers to use.

The long-term goal of this project is to computationally generate visualizations that reveal how intelligent systems work to non-computing knowledge workers. To progress toward this goal, the project team will focus on three major research activities. First, they will conduct a bottom-up, four-pass analysis of literature spanning multiple research communities (Artificial Intelligence, Machine Learning, Computer Science Education, Human-Computer Interaction, and Visualization) to summarize the key design dimensions of visualizing intelligent systems. Second, they will collect empirical evidence of how such systems are currently used and understood in the medical domain by observing how medical professionals interact with data and make sense of domain-specific intelligent systems. Third, building on the first two activities, they will develop computational methods that illustrate how these systems work by visualizing how user-sampled data is transformed into final results, while providing controls that let domain experts interact with the model.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2019-02-15
Budget End: 2021-12-31
Fiscal Year: 2018
Total Cost: $200,460
Name: University of California Los Angeles
City: Los Angeles
State: CA
Country: United States
Zip Code: 90095