Modern civilization asks trained experts to perform a range of critical visual search tasks such as satellite image interpretation, airport baggage screening, and quality monitoring in manufacturing. In medicine, visual search through complex medical images is critical to the detection and diagnosis of disease (e.g. screening for breast or lung cancer, evaluation of stroke, detection of internal injuries after trauma). These important search tasks are quite varied, but they share significant characteristics: they are demanding tasks, carried out by experts who are expected to perform with high accuracy; very significant resources are devoted to them; and very substantial costs accompany errors. These are quite literally matters of life and death. Nevertheless, committed professionals do not perform these tasks as well as would be desired or expected. Why are their error rates as high as they are?

Search tasks are not new; animals have always searched for food and mates. This proposal tests the hypothesis that the processes of search that served us well as visual foragers in the natural world may not serve as well when we search for tumors or fractures. Errors can arise when natural mechanisms of search meet the artificial demands of medical image perception, and errors can be reduced by developing and applying an understanding of the human search engine. Our basic research strategy is to bring problems from the clinical setting into the lab, where they can be extensively studied in non-expert populations, and then to use those basic research results to generate highly focused hypotheses that can be tested in the clinical setting. There are three specific aims:

1) Many modern medical imaging devices create 3D volumes of data (e.g. CT and MRI). Aim 1 tests the hypothesis that the presentation of 3D data changes the sources of errors in medical image perception (e.g. by increasing the chance that a specific region of a 3D dataset will not be attended).

2) Many medical image perception tasks are "foraging" tasks in which observers search for multiple targets in displays extended in time and space. Foraging has been a topic of interest in the animal behavior literature but only minimally in the human visual search literature. When is it time to stop foraging and move on? Aim 2 tests the hypothesis that foraging rules that work in the natural world may be a source of errors in the clinic.

3) The third aim is to fuse models of visual search and models of medical image perception into a more comprehensive model. Search models from the lab have dealt with search tasks that last for about a second; medical image perception models are concerned with tasks that last for minutes or more. Aim 3 tests the hypothesis that basic search models can be extended to this longer time frame, with changes constrained by the research proposed here.

Public Health Relevance

Error rates in medical image perception tasks like screening for breast cancer are distressingly high. In a substantial proportion of cases, the sign of disease is visible but is nevertheless missed by a well-intentioned, well-trained expert. Since humans will be performing these tasks for the foreseeable future, the goal of this research is to understand the perceptual and attentional factors that produce these errors so that countermeasures can be devised to prevent them.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY017001-06
Application #
8258718
Study Section
Cognition and Perception Study Section (CP)
Program Officer
Wiggs, Cheri
Project Start
2005-12-01
Project End
2014-04-30
Budget Start
2012-05-01
Budget End
2013-04-30
Support Year
6
Fiscal Year
2012
Total Cost
$438,668
Indirect Cost
$188,668
Name
Brigham and Women's Hospital
Department
Type
DUNS #
030811269
City
Boston
State
MA
Country
United States
Zip Code
02115
Chin, Michael D; Evans, Karla K; Wolfe, Jeremy M et al. (2018) Inversion effects in the expert classification of mammograms and faces. Cogn Res Princ Implic 3:31
Wolfe, Jeremy M; Utochkin, Igor S (2018) What is a preattentive feature? Curr Opin Psychol 29:19-26
Boettcher, Sage E P; Drew, Trafton; Wolfe, Jeremy M (2018) Lost in the supermarket: Quantifying the cost of partitioning memory sets in hybrid search. Mem Cognit 46:43-57
Kok, Ellen M; Aizenman, Avi M; Võ, Melissa L-H et al. (2017) Even if I showed you where you looked, remembering where you just looked is hard. J Vis 17:2
Wolfe, Jeremy M; Alaoui Soce, Abla; Schill, Hayden M (2017) How did I miss that? Developing mixed hybrid visual search as a 'model system' for incidental finding errors in radiology. Cogn Res Princ Implic 2:35
Cunningham, Corbin A; Drew, Trafton; Wolfe, Jeremy M (2017) Analog Computer-Aided Detection (CAD) information can be more effective than binary marks. Atten Percept Psychophys 79:679-690
Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M (2017) One visual search, many memory searches: An eye-tracking investigation of hybrid search. J Vis 17:5
Aizenman, Avi; Drew, Trafton; Ehinger, Krista A et al. (2017) Comparing search patterns in digital breast tomosynthesis and full-field digital mammography: an eye tracking study. J Med Imaging (Bellingham) 4:045501
Wen, Gezheng; Aizenman, Avigael; Drew, Trafton et al. (2016) Computational assessment of visual search strategies in volumetric medical images. J Med Imaging (Bellingham) 3:015501
Ehinger, Krista A; Allen, Kala; Wolfe, Jeremy M (2016) Change blindness for cast shadows in natural scenes: Even informative shadow changes are missed. Atten Percept Psychophys 78:978-87

Showing the most recent 10 out of 69 publications