Machine learning now underpins the smart software present in many parts of our daily lives, including face recognition in cameras, chatbots that can answer questions, and speech recognition and language translation on our mobile phones. The prediction models that drive such software are created automatically by machine learning algorithms trained on large amounts of historical data. One issue with these models is that they are often black-box in nature and difficult for humans to understand. In particular, they can be overconfident in their predictions and are not always able to recognize their own limitations. As these models move into more critical tasks, such as autonomous driving and medical diagnosis, it is becoming increasingly important to understand their limitations in real-world practical situations. This research project will address these issues by investigating new mathematical and algorithmic approaches that improve our ability to assess the performance and confidence of black-box prediction models, particularly when the models are operating in new environments that they have not encountered before. The outcomes of this research have the potential to significantly improve the reliability and usability of machine learning systems across a broad range of areas such as medicine, transportation, business, and consumer applications.

This project focuses on an aspect of explainable AI concerned with enabling black-box machine learning models (specifically those based on classification and regression) to produce confidence statements about their predictions. The project pursues novel methods for understanding these models' predictions in order to overcome the implicit overconfidence that black-and-white, in-or-out classification outcomes can convey. The research will bring together expertise from cognitive science and computer science in the context of two broad themes. The first theme will focus on developing accurate and robust algorithms that learn how much confidence to place in a black-box model's predictions. The researchers will investigate new Bayesian calibration methods and develop a broad framework for robust and accurate online assessment of the capabilities of black-box prediction models. The second theme will leverage the algorithmic advances from the first theme to develop new approaches to confidence assessment that improve the effectiveness of the combined efforts of a black-box predictor and a human decision-maker. This work will in turn provide the basis for trading off prediction accuracy and human effort, and enable the development of techniques that leverage accurate confidence estimates to reduce algorithm aversion and increase human trust. Engaging the interest of a broader community will also be a key aspect of the project, with a focus on workshops and hackathons involving under-represented community college students in Southern California, to address broad-ranging questions related to the use of artificial intelligence and machine learning techniques in our everyday world.
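To make the first theme concrete, the sketch below illustrates one simple way a system could assess, online, how much confidence to place in a black-box model's stated confidence: labeled outcomes are grouped by reported confidence, and a Beta posterior over the true accuracy is maintained for each group. This is a minimal illustrative example under assumed design choices (the class name, binning scheme, and normal-approximation interval are all assumptions), not the project's actual method.

```python
# Minimal sketch: Bayesian tracking of how well a black-box model's reported
# confidence matches its observed accuracy. Not the project's method; an
# illustration of online confidence assessment from labeled feedback.
import numpy as np


class BetaCalibrationTracker:
    """Per-confidence-bin Beta posteriors over the probability the model is correct."""

    def __init__(self, n_bins=10, prior_alpha=1.0, prior_beta=1.0):
        self.bins = np.linspace(0.0, 1.0, n_bins + 1)   # bin edges on [0, 1]
        self.alpha = np.full(n_bins, prior_alpha)        # prior + observed correct
        self.beta = np.full(n_bins, prior_beta)          # prior + observed incorrect

    def _bin_index(self, confidence):
        # Map a reported confidence to its bin, clamping 1.0 into the last bin.
        idx = np.searchsorted(self.bins, confidence, side="right") - 1
        return int(min(idx, len(self.alpha) - 1))

    def update(self, confidence, was_correct):
        """Online update from one labeled prediction."""
        i = self._bin_index(confidence)
        if was_correct:
            self.alpha[i] += 1.0
        else:
            self.beta[i] += 1.0

    def posterior_accuracy(self, confidence):
        """Posterior mean and approximate 95% interval for accuracy in this bin."""
        i = self._bin_index(confidence)
        a, b = self.alpha[i], self.beta[i]
        mean = a / (a + b)
        sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))  # Beta posterior std. dev.
        return mean, (max(0.0, mean - 1.96 * sd), min(1.0, mean + 1.96 * sd))


# Example: the model reports 0.9 confidence but is right only ~70% of the time;
# the posterior over accuracy in that bin reveals the overconfidence.
rng = np.random.default_rng(0)
tracker = BetaCalibrationTracker()
for _ in range(200):
    tracker.update(confidence=0.9, was_correct=rng.random() < 0.7)
print(tracker.posterior_accuracy(0.9))
```

The posterior interval, rather than a single calibration estimate, is what allows a downstream decision-maker to distinguish "the model is well calibrated here" from "we simply have not seen enough examples in this confidence range yet."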

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2019-10-01
Budget End: 2023-09-30
Fiscal Year: 2019
Total Cost: $1,199,898
Name: University of California Irvine
City: Irvine
State: CA
Country: United States
Zip Code: 92697