Synthetic cues (e.g., arrows and boxes) that predict a target's location speed up search and increase decision accuracy. Similarly, when observers search natural scenes, a highly visible object (e.g., a house) that frequently co-occurs with a sought target (e.g., a chimney) influences eye movements and facilitates search when the target appears close to that object. While single-cell neurophysiology, human electrophysiology, and neuroimaging have, over the last few decades, greatly advanced our understanding of the effects of attention and synthetic cues on neural activity, little is known about the neural mechanisms that mediate context effects during visual search in real scenes. Here, we propose to measure neural activity using functional magnetic resonance imaging (fMRI) while observers search for targets in real scenes, and to apply neural decoding methods (multivariate pattern analysis) and a novel variation of population receptive field methods to: 1) determine the brain areas (fMRI) that represent the spatial location of scene context and thus might mediate guidance of search in real scenes; and 2) evaluate whether the coding of scene context is automatic or modulated by top-down visual attention. The proposed work will improve our understanding of the neural mechanisms of scene context, arguably one of the most important strategies observers use to optimize visual search in natural environments. Our results will also advance our understanding of the function and role of brain regions related to attention, objects/scenes, and contextual associations in visual search. These advances could help identify neural correlates of poor behavioral performance in patients with low vision and attentional deficits in an ecologically important task such as visual search in real scenes.
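The core logic of the proposed decoding analyses can be illustrated with a minimal sketch. This is not the authors' pipeline; it shows, on simulated data, the idea behind multivariate pattern analysis (MVPA): a classifier is trained on single-trial voxel activity patterns, and above-chance cross-validated accuracy indicates that the patterns carry information about the experimental condition (here, a hypothetical "context left" vs. "context right" contrast). All numbers (trial counts, voxel counts, effect size) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 50

# Simulated single-trial voxel patterns for two conditions; condition 1
# carries a small mean shift so there is signal for the decoder to find.
labels = rng.integers(0, 2, n_trials)
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1] += 0.5

# Leave-one-out nearest-mean decoding: classify each held-out trial by
# which condition's mean training pattern it is closer to (Euclidean).
correct = 0
for i in range(n_trials):
    train = np.delete(np.arange(n_trials), i)
    m0 = patterns[train][labels[train] == 0].mean(axis=0)
    m1 = patterns[train][labels[train] == 1].mean(axis=0)
    pred = 0 if (np.linalg.norm(patterns[i] - m0)
                 < np.linalg.norm(patterns[i] - m1)) else 1
    correct += int(pred == labels[i])

accuracy = correct / n_trials
print(accuracy)  # well above the 0.5 chance level
```

In a real analysis the patterns would be beta estimates or BOLD amplitudes from searchlight or region-of-interest voxels, and significance would be assessed against a permutation-based null distribution rather than the nominal 0.5 chance level.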

Public Health Relevance

When searching for objects, humans use knowledge of the layout of the scene to guide eye movements and facilitate finding the sought object. We seek to use non-invasive measures of human neural activity to identify the brain mechanisms that mediate this guidance by scene context. The tools and framework developed could potentially lead to assessment of neural signatures of patients with visual and attentional deficits.

National Institutes of Health (NIH)
National Eye Institute (NEI)
Exploratory/Developmental Grants (R21)
Study Section: Cognition and Perception Study Section (CP)
Program Officer: Steinmetz, Michael A
University of California Santa Barbara
Schools of Arts and Sciences
Santa Barbara
United States
Eckstein, Miguel P; Schoonveld, Wade; Zhang, Sheng et al. (2015) Optimal and human eye movements to clustered low value cues to increase decision rewards during search. Vision Res 113:137-54
Koehler, Kathryn; Guo, Fei; Zhang, Sheng et al. (2014) What do saliency models predict? J Vis 14:14
Preston, Tim J; Guo, Fei; Das, Koel et al. (2013) Neural representations of contextual guidance in visual search of real-world scenes. J Neurosci 33:7846-55