Modern civilization asks trained experts to perform a range of critical visual search tasks such as satellite image interpretation, airport baggage screening, and quality monitoring in manufacturing. In medicine, visual search through complex medical images is critical to the detection and diagnosis of disease (e.g. screening for breast or lung cancer, evaluation of stroke, detection of internal injuries after trauma). These important search tasks are quite varied, but they share significant characteristics. They are demanding tasks, carried out by experts who are asked to perform with high accuracy. Substantial resources are devoted to these tasks, and substantial costs accompany errors. These are quite literally matters of life and death. Nevertheless, committed professionals do not perform these tasks as well as would be desired or expected. Why are their error rates as high as they are?

Search tasks are not new; animals have always searched for food and mates. This proposal tests the hypothesis that the processes of search that served us well as visual foragers in the natural world may not serve as well when we search for tumors or fractures. Errors can arise when natural mechanisms of search meet the artificial demands of medical image perception, and they can be reduced by developing and applying an understanding of the human search engine. Our basic research strategy is to bring problems from the clinical setting into the lab, where they can be extensively studied in non-expert populations, and then to use those basic research results to generate highly focused hypotheses that can be tested in the clinical setting.

There are three specific aims:

1) Many modern medical imaging devices (e.g. CT and MRI) create 3D volumes of data. Aim 1 tests the hypothesis that the presentation of 3D data changes the sources of errors in medical image perception (e.g. by increasing the chance that a specific region of a 3D dataset will not be attended).

2) Many medical image perception tasks are "foraging" tasks in which observers search for multiple targets in displays extended in time and space. Foraging has long been a topic of interest in the animal behavior literature but only minimally in the human visual search literature. When is it time to stop foraging and move on? Aim 2 tests the hypothesis that foraging rules that work in the natural world may be a source of errors in the clinic.

3) The third aim is to fuse models of visual search and models of medical image perception into a more comprehensive model. Search models from the lab have dealt with tasks lasting about a second, while medical image perception models are concerned with tasks lasting minutes or more. Aim 3 tests the hypothesis that basic search models can be extended to this longer time frame, with changes constrained by the research proposed here.

Public Health Relevance

Error rates in medical image perception tasks like screening for breast cancer are distressingly high. In a substantial proportion of cases, the sign of disease is visible but is nevertheless missed by a well-intentioned, well-trained expert. Since humans will be performing these tasks for the foreseeable future, the goal of this research is to understand the perceptual and attentional factors that produce these errors so that countermeasures can be devised to prevent them.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY017001-06
Application #
8258718
Study Section
Cognition and Perception Study Section (CP)
Program Officer
Wiggs, Cheri
Project Start
2005-12-01
Project End
2014-04-30
Budget Start
2012-05-01
Budget End
2013-04-30
Support Year
6
Fiscal Year
2012
Total Cost
$438,668
Indirect Cost
$188,668
Name
Brigham and Women's Hospital
Department
Type
DUNS #
030811269
City
Boston
State
MA
Country
United States
Zip Code
02115
Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M (2016) Searching while loaded: Visual working memory does not interfere with hybrid search efficiency but hybrid search uses working memory capacity. Psychon Bull Rev 23:201-12
Josephs, Emilie L; Draschkow, Dejan; Wolfe, Jeremy M et al. (2016) Gist in time: Scene semantics and structure enhance recall of searched objects. Acta Psychol (Amst) 169:100-8
Josephs, Emilie; Drew, Trafton; Wolfe, Jeremy (2016) Shuffling your way out of change blindness. Psychon Bull Rev 23:193-200
Wen, Gezheng; Aizenman, Avigael; Drew, Trafton et al. (2016) Computational assessment of visual search strategies in volumetric medical images. J Med Imaging (Bellingham) 3:015501
Ehinger, Krista A; Allen, Kala; Wolfe, Jeremy M (2016) Change blindness for cast shadows in natural scenes: Even informative shadow changes are missed. Atten Percept Psychophys 78:978-87
Drew, Trafton; Aizenman, Avi M; Thompson, Matthew B et al. (2016) Image toggling saves time in mammography. J Med Imaging (Bellingham) 3:011003
Ehinger, Krista A; Wolfe, Jeremy M (2016) When is it time to move to the next map? Optimal foraging in guided visual search. Atten Percept Psychophys 78:2135-51
Evans, Karla K; Haygood, Tamara Miner; Cooper, Julie et al. (2016) A half-second glimpse often lets radiologists identify breast cancer cases even when viewing the mammogram of the opposite breast. Proc Natl Acad Sci U S A 113:10292-7
Võ, Melissa L-H; Aizenman, Avigael M; Wolfe, Jeremy M (2016) You think you know where you looked? You better look again. J Exp Psychol Hum Percept Perform 42:1477-81
Wolfe, Jeremy M; Aizenman, Avigael M; Boettcher, Sage E P et al. (2016) Hybrid foraging search: Searching for multiple instances of multiple types of target. Vision Res 119:50-9

Showing the most recent 10 out of 56 publications