Understanding and enhancing visual search performance in complex scenes

Cancer screening saves lives (e.g. National Lung Screening Trial Research Team, 2011). Every day, radiologists face difficult, time-consuming visual search tasks such as mammography and lung cancer screening. Signs of cancer, e.g. lung nodules or subtle abnormalities in a breast, are often very hard to find against heterogeneous backgrounds and can be obscured by overlapping tissue. Missing these signs can result in wrong diagnoses with life-or-death consequences. It is therefore of key interest to identify and tackle the problems posed by these crucial search tasks. The proposed studies aim to improve search performance in cancer screening in two ways.

First, by enhancing the visibility of lung nodules embedded in 3D volumetric datasets. Radiologists usually search for lung nodules by scrolling through stacks of chest CT slices. Nodules are roughly spherical features, spanning a few slices in a CT stack. Anecdotally, experts report that the way lung nodules 'pop' in and out of the changing view is a signal to their presence. We propose an innovative approach using saliency algorithms that harness this signal to enhance chest CTs in a way that directs attention to crucial regions of a scene.

Second, we aim to improve search performance by understanding and utilizing non-selective, 'gist'-like processing of mammograms. Recent work has shown that expert radiologists can detect a global signal in mammograms that allows above-chance categorization of normal and abnormal breasts after very short exposures to the stimulus. We will first train novices to become experts on close approximations of these medical tasks. As expertise develops, we will use electroencephalography (EEG) to investigate two neural correlates that might evolve in the course of training, namely the P300 and the N2pc. In other settings, these measures can signal attentional selection either across (P300) or within (N2pc) briefly presented images. We will adapt a new method that exploits machine learning for real-time decoding of brain signals, allowing neural signatures elicited in response to a sequence of images to be used to rank those images in order of their implicit 'interest' to the viewer. We hypothesize that this information can be fed back to the observer/radiologist as a source of information that might, for example, suggest that an image or region deserves more scrutiny.

The main goals of this proposal are therefore to understand the guidance of attention in complex visual search tasks and to apply this knowledge to improvements in clinically relevant search tasks like cancer screening.
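The temporal 'pop' signal can be illustrated with a minimal sketch (not the proposal's actual algorithm): because a nodule spans only a few slices, it appears and disappears as the viewer scrolls, so absolute slice-to-slice intensity differences give a rough saliency map of changing regions. The function name, the blending `weight` parameter, and the NumPy-based formulation are illustrative assumptions.

```python
import numpy as np

def temporal_pop_saliency(stack, weight=0.5):
    """Boost voxels that change between adjacent CT slices.

    Illustrative sketch: `stack` is a 3D array (slices, rows, cols).
    Slice-to-slice absolute differences approximate the 'pop' signal
    that experts report when scrolling; `weight` (an assumed parameter)
    controls how strongly the difference map is blended back in.
    """
    stack = stack.astype(float)
    diff = np.zeros_like(stack)
    diff[1:] = np.abs(stack[1:] - stack[:-1])   # change vs. previous slice
    if diff.max() > 0:
        diff /= diff.max()                      # normalize to [0, 1]
    return stack + weight * diff * stack.max()  # emphasize changing regions
```

On a synthetic stack where a small bright blob is present in only the middle slice, the blob and the slices around it receive extra contrast while static slices are left unchanged.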

Public Health Relevance

Early detection of cancer can save lives, but error rates in cancer screening, both misses and false alarms, remain too high. We propose two ways of aiding cancer screening by radiologists: 1) We will enhance search displays by using saliency algorithms to direct attention to crucial regions of a scene. 2) We will utilize neural signals elicited by non-selective, 'gist'-like processing of medical images as a novel support for the evaluation of mammograms.
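The second aim's ranking step can be sketched in a few lines. This is a hypothetical illustration, not the proposal's method: it assumes an upstream classifier has already reduced each image's EEG epoch to a scalar score (e.g. a decoded probability of a target-related P300), and simply orders images by that decoded 'interest'.

```python
import numpy as np

def rank_images_by_decoder_score(scores):
    """Return image indices ordered from most to least 'interesting'.

    Minimal sketch under an assumed interface: `scores` holds one
    decoder output per briefly presented image; higher means the
    neural response looked more target-like and the image may
    deserve further scrutiny.
    """
    scores = np.asarray(scores, dtype=float)
    return np.argsort(-scores)  # indices in descending score order

# Hypothetical decoder outputs for five briefly presented image patches:
scores = [0.12, 0.85, 0.40, 0.91, 0.05]
ranking = rank_images_by_decoder_score(scores)
# ranking starts with image 3 (score 0.91), then image 1 (0.85), ...
```

In a real-time setting, such a ranking could be recomputed as epochs stream in, flagging the top-ranked images for the radiologist to revisit.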

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Postdoctoral Individual National Research Service Award (F32)
Project #
5F32EY022558-02
Application #
8580179
Study Section
Special Emphasis Panel (ZRG1-F02B-M (20))
Program Officer
Agarwal, Neeraj
Project Start
2012-12-01
Project End
2015-11-30
Budget Start
2013-12-01
Budget End
2014-11-30
Support Year
2
Fiscal Year
2014
Total Cost
$55,670
Indirect Cost
Name
Brigham and Women's Hospital
Department
Type
DUNS #
030811269
City
Boston
State
MA
Country
United States
Zip Code
02115
Draschkow, Dejan; Wolfe, Jeremy M; Võ, Melissa L-H (2014) Seek and you shall remember: scene semantics interact with visual search to build better memories. J Vis 14:10
Võ, Melissa L-H; Wolfe, Jeremy M (2013) Differential electrophysiological signatures of semantic and syntactic scene processing. Psychol Sci 24:1816-23