Neural plasticity and perceptual learning are fundamental to the developmental stages of vision, to attaining expertise in specialized perceptual tasks, and to recovery from brain injuries and low-vision disorders. One important process in perceptual learning is the improvement in humans' ability to use task-relevant (signal) information. Although there have been advances in understanding the dynamics and algorithms by which humans optimize the selection of task-relevant visual information, little is known about how eye movement patterns vary with practice and how they impact perceptual performance. Yet, in real-world environments, eye movements are a critical component of active vision as humans explore the visual scene to make perceptual judgments. Understanding perceptual learning in daily life therefore requires studying the mechanisms mediating changes in eye movement planning with learning and their contributions to optimizing perceptual performance. We hypothesize that two new experimental paradigms with digitally designed visual stimuli, in conjunction with eye position recording and a newly developed foveated ideal observer and Bayesian learner, will help elucidate how humans learn to strategize their eye movements and how optimized sampling of the images contributes to improvements in perceptual learning.
The proposed work will address the following questions: 1) Do humans use learned information about the statistical properties of the visual stimuli and the requirements of the task at hand to strategize their eye movements, optimizing the foveal sampling of the visual scene and perceptual performance? 2) Do humans use knowledge of the varying resolution of their foveated visual system to optimally learn to plan eye movements for a given set of visual stimuli and task? 3) What are the contributions of learning to strategize eye movements to the overall improvements in perceptual performance in ecologically important tasks such as face recognition, object identification, and visual search? 4) How do human fixation patterns and the performance benefits of strategizing eye movements compare to those of an optimal foveated observer and learner? The proposed work will improve our understanding of the human neural algorithms mediating the dynamics of adult perceptual learning during active vision for ecologically important tasks. The proposed experimental protocols and theoretical developments will also provide a novel, powerful, and flexible framework with which other researchers can study eye movements and learning in humans recovering from visual loss as well as in patients with learning disabilities.
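To make the foveated-observer idea concrete, the following is a minimal toy sketch (an illustrative assumption, not the proposal's actual model): a target occupies one of N locations, each fixation yields noisy samples whose noise grows with eccentricity from the fixation point, and a Bayesian observer updates a posterior over target location after each fixation, refixating where the posterior is currently highest. All function names and parameters here are hypothetical.

```python
# Toy foveated Bayesian observer: posterior updating over target locations
# with eccentricity-dependent observation noise. Illustrative only.
import math
import random

def visibility(distance, fovea_sigma=1.0, slope=0.5):
    # Observation noise (std. dev.) grows with distance from fixation,
    # a crude stand-in for the falloff of peripheral resolution.
    return fovea_sigma + slope * distance

def update_posterior(prior, samples, fixation, signal=1.0):
    # Bayes' rule: for each candidate target location i, score the samples
    # under "target at i" (mean = signal there, 0 elsewhere).
    post = []
    for i, p in enumerate(prior):
        loglik = 0.0
        for j, x in enumerate(samples):
            sigma = visibility(abs(j - fixation))
            mu = signal if j == i else 0.0
            loglik += -((x - mu) ** 2) / (2.0 * sigma ** 2)
        post.append(p * math.exp(loglik))
    z = sum(post)
    return [p / z for p in post]

def simulate(n_locations=8, n_fixations=5, target=3, seed=0):
    rng = random.Random(seed)
    posterior = [1.0 / n_locations] * n_locations
    fixation = n_locations // 2
    for _ in range(n_fixations):
        # One noisy sample per location; noise depends on eccentricity.
        samples = [(1.0 if j == target else 0.0)
                   + rng.gauss(0.0, visibility(abs(j - fixation)))
                   for j in range(n_locations)]
        posterior = update_posterior(posterior, samples, fixation)
        # Simple MAP policy: fixate the currently most probable location.
        fixation = max(range(n_locations), key=lambda i: posterior[i])
    return posterior
```

An ideal searcher in the fuller sense would instead choose each fixation to maximize expected information gain rather than greedily fixating the posterior peak; the sketch above only illustrates the posterior-updating component.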

Public Health Relevance

The proposed work benefits public health by increasing our understanding of how humans learn to move their eyes to potentially informative regions of the visual scene during important daily tasks such as identifying faces or searching for objects. A thorough understanding of these mechanisms in normal humans will allow identification of learning anomalies in patients recovering from visual loss or with learning disabilities, and may support the development of tests to assess treatments.

National Institutes of Health (NIH)
National Eye Institute (NEI)
Research Project (R01)
Study Section
Central Visual Processing Study Section (CVP)
Program Officer
Wiggs, Cheri
University of California Santa Barbara
Schools of Arts and Sciences
Santa Barbara
United States
Kurki, Ilmari; Eckstein, Miguel P (2014) Template changes with perceptual learning are driven by feature informativeness. J Vis 14:
Peterson, Matthew F; Eckstein, Miguel P (2013) Individual differences in eye movements during face identification reflect observer-specific optimal points of fixation. Psychol Sci 24:1216-25
Eckstein, Miguel P (2011) Visual search: a retrospective. J Vis 11:
Peterson, Matthew F; Abbey, Craig K; Eckstein, Miguel P (2009) The surprisingly high human efficiency at learning to recognize faces. Vision Res 49:301-14
Droll, Jason A; Abbey, Craig K; Eckstein, Miguel P (2009) Learning cue validity through performance feedback. J Vis 9:18.1-23
Abbey, Craig K; Pham, Binh T; Shimozaki, Steven S et al. (2008) Contrast and stimulus information effects in rapid learning of a visual task. J Vis 8:8.1-14
Shimozaki, Steven S; Chen, Kelly Y; Abbey, Craig K et al. (2007) The temporal dynamics of selective attention of the visual periphery as measured by classification images. J Vis 7:10.1-20
Eckstein, Miguel P; Beutter, Brent R; Pham, Binh T et al. (2007) Similar neural representations of the target for saccades and perception during search. J Neurosci 27:1266-70