This project will enable physicians to use artificial intelligence (AI) to make more informed medical diagnoses from imaging data (e.g., X-ray scans). Recent advances in AI make it possible to automate medical diagnosis by processing large amounts of imaging data. In practice, however, clinical adoption of AI has been slow because most AI systems function as "black boxes": physicians cannot see why an AI makes a given diagnosis, nor can they correct it when they spot a mistake. This lack of trust prevents AI from being integrated into, and enhancing, physicians' work. This project develops methods to make AI diagnoses explainable to physicians while allowing physicians to interact with and control how the AI works, such as telling the AI to adjust its parameters for a specific patient's case or teaching the AI new medical knowledge to improve its performance. The outcome of this project will contribute to a new generation of AI-enabled medical diagnostic systems that collaborate with human physicians, cost-effectively communicating results to them while giving them easy and sufficient control over the underlying process.

To achieve these goals, the investigator seeks to expand the interaction bandwidth between physicians and AI by adding a user-interface layer that guides physicians to see, question, and understand what the AI is doing, and enables them to delegate tasks to the AI while telling or teaching it how to perform those tasks. Specifically, the investigator will conduct studies to understand physicians' needs for explanatory information in existing practice, and will use the findings to co-design, with physicians, interactive visualizations that let them comprehend the AI's findings through question-and-answer. The project will also investigate methods that enable physicians to express their intent to control the AI's behavior (e.g., specifying rules at run-time to compensate for an existing model's limitations), as well as to convey their domain knowledge to the AI to shape its long-term behavior in future diagnoses (for example, by extracting AI-learnable medical concepts from physicians' labels and annotations on medical imaging data).

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 2047297
Program Officer: Todd Leen
Budget Start: 2021-04-01
Budget End: 2026-03-31
Fiscal Year: 2020
Total Cost: $111,330
Institution: University of California Los Angeles
City: Los Angeles
State: CA
Country: United States
Zip Code: 90095