Lung cancer is the most common cause of cancer death in both men and women in the United States. Lung cancer screening with low-dose computed tomography (CT) has been shown to reduce lung cancer mortality. However, current radiology practice still suffers from (1) high rates of missed tumors and (2) imprecise lung nodule characterization (malignant vs. benign). Artificial intelligence (AI)-based computer-aided diagnosis (CAD) systems have helped radiologists moderately reduce missed-tumor rates but have not been widely adopted, for three key reasons: lack of efficiency, lack of real-time collaboration, and lack of interpretability. The overall goal of this proposal is to create radiologist-centered AI algorithms that are both interpretable and collaborative, and to demonstrate their improved efficacy via lung cancer screening experiments. The central hypothesis of this effort is that an AI-based virtual cognitive assistant (VCA) will provide a better understanding of cognitive biases while offering interpretable feedback to radiologists, yielding an improved screening experience with higher diagnostic accuracy, reproducibility, and efficiency.
The specific aims of this proposal are threefold.
Aim 1: To develop an eye-tracking platform that offers a realistic radiology reading room experience while extracting gaze patterns from radiologists. This will lay the groundwork for true collaboration between radiologists and CAD. Radiologists will perform their screening without any constraints (e.g., they may wear glasses) while their gaze patterns and other human-computer interaction events are tracked, processed, and stored in real time, as sketched in the example below.
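The following minimal Python sketch illustrates the kind of real-time event logging Aim 1 describes. It assumes gaze samples and interaction events (e.g., CT slice scrolls) arrive via callbacks from a vendor eye-tracker SDK; the callback names, event fields, and the simulated session here are hypothetical illustrations, not the proposal's actual implementation.

import json
import random
import time
from dataclasses import dataclass, asdict
from queue import Queue

@dataclass
class ReadingEvent:
    t: float        # acquisition timestamp (seconds since epoch)
    x: float        # normalized screen coordinate, 0..1 (unused for non-gaze events)
    y: float        # normalized screen coordinate, 0..1
    kind: str       # "gaze", "scroll", ... (human-computer interaction event type)

event_queue: "Queue[ReadingEvent]" = Queue()

def on_gaze_sample(x: float, y: float) -> None:
    # Callback a hypothetical eye-tracker SDK would invoke for each gaze sample.
    event_queue.put(ReadingEvent(t=time.time(), x=x, y=y, kind="gaze"))

def on_scroll(slice_index: int) -> None:
    # Callback for a CT slice-scroll interaction event (hypothetical).
    event_queue.put(ReadingEvent(t=time.time(), x=-1.0, y=float(slice_index), kind="scroll"))

def drain_to_log(path: str) -> None:
    # Flush all queued events to a JSON-lines log for later fixation/search analysis.
    with open(path, "a") as f:
        while not event_queue.empty():
            f.write(json.dumps(asdict(event_queue.get())) + "\n")

if __name__ == "__main__":
    # Simulate a short reading session: gaze samples interleaved with slice scrolls.
    for i in range(100):
        on_gaze_sample(random.random(), random.random())
        if i % 20 == 0:
            on_scroll(slice_index=i // 20)
    drain_to_log("reading_session.jsonl")

A time-stamped, append-only event log of this form is one simple way to keep gaze and interaction streams synchronized for downstream modeling without constraining how the radiologist reads.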
Aim 2: To develop an automated real-time collaborative system in which the VCA and the radiologist synergistically improve detection and diagnostic performance. Using deep learning (DL) algorithms, the VCA will embody a powerful visual attention model to represent radiologists' gaze, visual search, and fixation patterns, and will comprise a detection component and a diagnostic component. A deep reinforcement learning algorithm will enable communication between the VCA and the radiologist. Lastly, a DL-based segmentation component will, on the fly, enable the VCA to derive and visualize quantitative measures (HU statistics, volume, long/short axis lengths, etc.) and overlay them, along with the tumor classification label (benign/malignant) and its probability, in real time; a sketch of this quantitative step follows below.
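As an illustration of the quantitative measures named in Aim 2, the sketch below computes HU statistics, volume, and approximate long/short axis lengths from a nodule segmentation. It assumes the segmentation component yields a binary voxel mask aligned with the CT volume; the mask and HU values below are synthetic, and the axis-length estimate (principal-component extents of the voxel cloud) is one reasonable choice, not necessarily the proposal's method.

import numpy as np

def nodule_measures(ct_hu: np.ndarray, mask: np.ndarray, spacing_mm: tuple) -> dict:
    # ct_hu      -- CT volume in Hounsfield units, shape (z, y, x)
    # mask       -- binary segmentation of the nodule, same shape as ct_hu
    # spacing_mm -- voxel spacing (dz, dy, dx) in millimetres
    voxels = ct_hu[mask > 0]
    voxel_volume = float(np.prod(spacing_mm))            # mm^3 per voxel

    # Physical coordinates (mm) of segmented voxels for axis-length estimation.
    coords = np.argwhere(mask > 0) * np.array(spacing_mm)
    centered = coords - coords.mean(axis=0)
    # Principal axes of the voxel cloud; extents along them approximate axis lengths.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    extents = np.ptp(centered @ vt.T, axis=0)            # span along each principal axis

    return {
        "hu_mean": float(voxels.mean()),
        "hu_std": float(voxels.std()),
        "volume_mm3": voxel_volume * int(mask.sum()),
        "long_axis_mm": float(extents.max()),
        "short_axis_mm": float(extents.min()),
    }

if __name__ == "__main__":
    # Synthetic example: an ellipsoidal "nodule" inside a small lung CT patch.
    z, y, x = np.mgrid[0:40, 0:40, 0:40]
    nodule = (((z - 20) / 8.0) ** 2 + ((y - 20) / 5.0) ** 2 + ((x - 20) / 5.0) ** 2) <= 1.0
    ct_hu = np.full(nodule.shape, -800.0)                 # lung parenchyma background
    ct_hu[nodule] = np.random.normal(-50.0, 30.0, nodule.sum())   # soft-tissue nodule
    print(nodule_measures(ct_hu, nodule.astype(np.uint8), spacing_mm=(1.0, 0.7, 0.7)))

Measures of this kind can be recomputed each time the segmentation updates, which is what allows the VCA to overlay them alongside the malignancy label and probability during the reading session.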
Aim 3: To evaluate the efficacy of the proposed VCA via lung cancer screening experiments involving six radiologists, at different expertise levels, from two institutions (University of Pennsylvania and NIH). The proposed VCA is a first-of-its-kind system that exploits the synergy between powerful DL technology and human experts to boost the clinical diagnostic performance of radiologists, unlike passive DL techniques that learn only from labeled data. The outcomes of this research are expected to be transformative, providing deep insights for redesigning current CAD systems to truly collaborate with radiologists, rather than acting as second-opinion tools or replacing them, and ultimately further reducing lung cancer-related deaths.

Public Health Relevance

Although artificial intelligence (AI)-based computer-aided diagnosis systems have been shown to be useful in lung cancer screening and diagnosis, current radiology practice still suffers from (1) high rates of missed tumors and (2) imprecise lung nodule characterization (malignant vs. benign). The goal of this proposal is to create novel AI algorithms, called radiologist-centered AI, that tightly integrate the radiologist's reading pattern with AI in real time via an eye-tracking device, and to demonstrate their improved efficacy via lung cancer screening experiments using low-dose CT scans. The outcome of this research will be a virtual cognitive assistant (VCA) for radiologists that truly and actively collaborates with them, instead of passively acting as a second-opinion tool or replacing them, ultimately further reducing lung cancer-related deaths.

Agency
National Institutes of Health (NIH)
Institute
National Cancer Institute (NCI)
Type
Research Project (R01)
Project #
1R01CA240639-01A1
Application #
9971649
Study Section
Emerging Imaging Technologies and Applications Study Section (EITA)
Program Officer
Zhang, Yantian
Project Start
2020-07-01
Project End
2024-05-31
Budget Start
2020-07-01
Budget End
2021-05-31
Support Year
1
Fiscal Year
2020
Total Cost
Indirect Cost
Name
University of Central Florida
Department
Engineering (All Types)
Type
Biomed Engr/Col Engr/Engr Sta
DUNS #
150805653
City
Orlando
State
FL
Country
United States
Zip Code
32826