Recent advances in deep learning have enabled dramatic progress on basic perceptual tasks such as speech recognition and object detection. To pave the way for the many human-centered applications these advances might enable, in healthcare for instance, it is important to move beyond classification problems: to think of machine learning systems as producing not just category predictions, but also the reasons behind them. Moreover, these patterns of reasoning need to be comprehensible to humans. To this end, the project will focus on the exchange of knowledge between humans and machine learning systems, and on how such exchange of knowledge, beyond raw data, can lead to better predictions that are also human-interpretable. The project will result in technological advances with the potential to significantly improve the usability of machine learning in human-facing applications.

The technical aims of this project are organized around two broad themes. The first theme addresses the question, "How can we involve human feedback in the machine learning process to create succinct models that are interpretable and generate predictions that are explainable?" By enabling humans to provide rich feedback in the form of rules of thumb expressed as relational knowledge, the project aims to derive succinct, interpretable machine learning models that admit simple explanations compatible with humans' causal world-view. To further enhance interpretability, the project will explore how human feedback based on relational knowledge can be leveraged to reduce the size of the data sets required to train accurate models. The second theme addresses the question, "How can we encode and exploit relational information in deriving interpretable and explainable models for reasoning?" The project will explore the encoding of relational knowledge in both vector spaces and logical models, and will further investigate how relational knowledge can be used for analogical reasoning, semantic understanding, and relational queries.
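As a minimal illustration of the second theme (not the project's actual method), the sketch below shows one common way relational knowledge can be encoded in a vector space: a TransE-style embedding in which a fact (head, relation, tail) holds when head + relation lies close to tail. All entities, relations, and vectors here are hypothetical toy examples, hand-aligned rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy entity and relation embeddings (random placeholders).
entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Rome", "Italy"]}
relations = {"capital_of": rng.normal(size=dim)}

# Pretend training has already aligned the vectors so that h + r ≈ t
# for the true facts capital_of(Paris, France) and capital_of(Rome, Italy).
entities["France"] = entities["Paris"] + relations["capital_of"]
entities["Italy"] = entities["Rome"] + relations["capital_of"]

def score(head, relation, tail):
    """Distance between head + relation and tail; smaller means more plausible."""
    return float(np.linalg.norm(entities[head] + relations[relation] - entities[tail]))

print(score("Paris", "capital_of", "France"))  # ~0.0: the fact holds
print(score("Paris", "capital_of", "Italy"))   # large: the fact does not hold

# Relational query: capital_of(Rome, ?) — rank entities by distance to Rome + capital_of.
query = entities["Rome"] + relations["capital_of"]
print(min(entities, key=lambda e: np.linalg.norm(query - entities[e])))  # Italy

# Analogical reasoning: Paris is to France as Rome is to ...?
target = entities["France"] - entities["Paris"] + entities["Rome"]
print(min((e for e in entities if e != "Rome"),
          key=lambda e: np.linalg.norm(target - entities[e])))  # Italy
```

In a real system these vectors would be learned from data, and the same relational knowledge could alternatively be expressed in logical form, as the abstract notes.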

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2020-10-01
Budget End: 2023-09-30
Fiscal Year: 2019
Total Cost: $498,930
Institution: University of California Los Angeles
City: Los Angeles
State: CA
Country: United States
Zip Code: 90095