Few behaviors are more important for successfully accomplishing everyday activities than searching the local environment with the eyes for specific objects or places. Humans, like other primates and many mammals, have high spatial resolution in only a small region (the fovea) and poorer spatial resolution over the remaining large region (the periphery). Thus, to search for objects and places the eyes rapidly jump (saccade) from one location to another across the visual scene. In the previous project period, we derived the mathematical theory of how to optimally move the eyes during visual search, and we compared the predictions of this theory to human eye-movement patterns and performance (speed and accuracy) in a visual search task. We found that when humans search for small targets hidden in images of naturalistic random texture, they achieve (with practice) nearly optimal performance, and they display a number of eye-movement patterns characteristic of the ideal searcher. Our new theory is important because: (i) it tells us precisely what the relevant functional components (neural subsystems) within the brain should compute during visual search, (ii) it provides a performance benchmark against which to evaluate candidate hypotheses for the actual neural subsystems, and (iii) it applies to eyes with different patterns of spatial resolution across the visual field and thus can tell us how the eyes should be moved for different animals and for humans with different kinds of visual field loss. Our recent experimental results are important because they strongly constrain hypotheses for the various functional components underlying visual search. Specifically, for each of the components, our results rule out any hypothesis that implies performance significantly below human performance (which is close to optimal).
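To illustrate the kind of computation the ideal searcher performs, the fixation loop can be sketched in simplified form: maintain a posterior probability over candidate target locations, update it after each fixation using the responses gathered there, and select the next fixation using the summation rule for optimal fixation selection (Najemnik & Geisler, 2009), which fixates the location maximizing the posterior-weighted sum of target visibilities. The Gaussian falloff of detectability with eccentricity, the parameter values, and all function names below are illustrative assumptions, not the grant's actual model; real visibility maps are measured psychophysically.

```python
import numpy as np

def visibility(fix, locs, d0=3.0, sigma=5.0):
    """Toy visibility map: detectability d' falls off as a Gaussian with
    eccentricity from the current fixation (an assumed form; the actual
    falloff must be measured for each observer)."""
    ecc = np.linalg.norm(locs - fix, axis=1)
    return d0 * np.exp(-ecc**2 / (2.0 * sigma**2))

def update_posterior(prior, fix, locs, responses):
    """Bayesian update of target-location probabilities given noisy
    template responses collected during the current fixation."""
    dp = visibility(fix, locs)
    # Log likelihood ratio for 'target at location i', unit-variance noise.
    loglik = dp * responses - 0.5 * dp**2
    post = prior * np.exp(loglik - loglik.max())  # subtract max for stability
    return post / post.sum()

def next_fixation(posterior, locs):
    """Summation rule: choose the fixation that maximizes the sum of
    target probabilities weighted by how visible each location would be."""
    scores = [np.sum(posterior * visibility(f, locs)) for f in locs]
    return locs[int(np.argmax(scores))]
```

On a grid of candidate locations, a strong response at one location concentrates the posterior there, and the summation rule then directs the next fixation toward that high-probability region, qualitatively reproducing the target-directed saccades of the ideal searcher.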
The first aim of the proposed research is to conduct a series of computational analyses and behavioral experiments directed at understanding how the human visual system implements the various functional components of fixation search. The ideal searcher and various sub-ideal searchers we have derived provide rigorous predictions for the alternative hypotheses.
The second aim is to generalize our approach to a wider range of natural search tasks. Our previous work was confined to the task of searching for a single known target randomly located in a background of statistically homogeneous texture. We propose to (i) derive ideal and sub-ideal searchers for tasks with more uncertainty about the target and with heterogeneous backgrounds, (ii) determine how well human observers perform in these tasks relative to ideal, and (iii) test between the different model searchers.
The third aim is to examine the effects of practice when the subjects' visual fields are altered using gaze-contingent software; these studies will provide insight into how human observers should, and how well they do, adapt their eye movement patterns to visual field loss.

Public Health Relevance

In previous work we derived the mathematically optimal strategy for moving the eyes when searching for objects in naturalistic images, and we found that healthy humans move their eyes and perform (in speed and accuracy) in a near-optimal fashion. Here, the primary aim is to investigate in more detail how the brain accomplishes this feat, but we will also determine the optimal eye movement strategies for different forms of visual field loss and determine how well normal humans adjust their eye movement strategies following the onset of simulated visual field loss. This work could lead to useful methods for determining the best performance that a patient and his/her doctor might expect (initially and with practice) in the important everyday task of visual search, and it could be valuable in developing and evaluating rehabilitation procedures.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY002688-33
Application #
8204932
Study Section
Central Visual Processing Study Section (CVP)
Program Officer
Wiggs, Cheri
Project Start
1981-12-01
Project End
2013-11-30
Budget Start
2011-12-01
Budget End
2013-11-30
Support Year
33
Fiscal Year
2012
Total Cost
$316,734
Indirect Cost
$102,894
Name
University of Texas at Austin
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
170230239
City
Austin
State
TX
Country
United States
Zip Code
78712
Benvenuti, Giacomo; Chen, Yuzhi; Ramakrishnan, Charu et al. (2018) Scale-Invariant Visual Capabilities Explained by Topographic Representations of Luminance and Texture in Primate V1. Neuron 100:1504-1512.e4
Paulun, Vivian C; Schütz, Alexander C; Michel, Melchi M et al. (2015) Visual search under scotopic lighting conditions. Vision Res 113:155-68
Bradley, Chris; Abrams, Jared; Geisler, Wilson S (2014) Retina-V1 model of detectability across the visual field. J Vis 14:
Michel, Melchi M; Chen, Yuzhi; Geisler, Wilson S et al. (2013) An illusion predicted by V1 population activity implicates cortical topography in shape perception. Nat Neurosci 16:1477-83
Geisler, Wilson S (2011) Contributions of ideal observer theory to vision research. Vision Res 51:771-81
Michel, Melchi; Geisler, Wilson S (2011) Intrinsic position uncertainty explains detection and localization performance in peripheral vision. J Vis 11:18
Najemnik, Jiri; Geisler, Wilson S (2009) Simple summation rule for optimal fixation selection in visual search. Vision Res 49:1286-94
Sit, Yiu Fai; Chen, Yuzhi; Geisler, Wilson S et al. (2009) Complex dynamics of V1 population responses explained by a simple gain-control model. Neuron 64:943-56
Najemnik, Jiri; Geisler, Wilson S (2008) Eye movement statistics in humans are consistent with an optimal search strategy. J Vis 8:4.1-14
Geisler, Wilson S (2008) Visual perception and the statistical properties of natural scenes. Annu Rev Psychol 59:167-92

Showing the most recent 10 out of 33 publications