The proposed research aims to provide a unified theory of the task control of gaze and walking trajectories as humans move through natural environments. Until recently this goal would have been intractable, but recent research has illuminated the connection between simple sensory-motor decisions and behavioral goals. In particular, reinforcement-learning algorithms use reward signals to predict optimal behavior, and the central role of reward is well established in neurophysiological studies. Nonetheless, it is unclear how these mechanisms determine natural visually guided behavior. Because natural gaze behavior is tightly linked to behavioral goals, reinforcement learning offers a promising framework for understanding how behaviorally relevant targets are selected. We will develop a theoretical framework based on reinforcement learning for understanding sensory-motor decisions as humans move through natural environments. We will first use inverse reinforcement learning to estimate the internal reward associated with different behavioral goals when subjects navigate through obstacles and targets in a virtual environment, and then use the estimated reward values to predict the specific fixation sequences made while performing the task. We will test whether reward-weighted uncertainty determines gaze changes, predict gaze allocation in novel environments, and test how reward and uncertainty combine. A critical feature of our approach is the decomposition of complex behavior into a set of sub-tasks. This decomposition has the potential to make complex behavior theoretically tractable, and we will test this assumption. We will also attempt to identify and quantify the sources of uncertainty, such as noise in sensory encoding, decay in spatial working memory, and uncertainty stemming from the observer's own motion through the environment. Prior knowledge of an environment allows more efficient allocation of attention to novel or unstable regions.
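As an illustration of the reward-weighted uncertainty idea, the following toy sketch (our own illustration, not the proposal's actual model; the sub-task names, reward values, and update rules are all assumptions) arbitrates gaze among competing sub-tasks by fixating whichever sub-task's current uncertainty is most costly in reward terms:

```python
import numpy as np

def next_fixation(rewards, variances):
    """Pick the sub-task whose uncertainty, weighted by its reward, is largest."""
    cost = np.asarray(rewards) * np.asarray(variances)
    return int(np.argmax(cost))

def step(rewards, variances, growth=0.2, reduction=0.8):
    """One gaze decision: uncertainty drifts upward for unattended sub-tasks
    and is largely resolved for the fixated one."""
    k = next_fixation(rewards, variances)
    variances = [v * (1.0 - reduction) if i == k else v * (1.0 + growth)
                 for i, v in enumerate(variances)]
    return k, variances

# Three hypothetical sub-tasks -- obstacle avoidance, target interception,
# path following -- with rewards assumed to come from inverse RL.
rewards = [1.0, 0.6, 0.3]
variances = [0.2, 0.5, 0.4]
sequence = []
for _ in range(6):
    k, variances = step(rewards, variances)
    sequence.append(k)
print(sequence)  # a cyclic schedule emerges: [1, 0, 2, 1, 0, 2]
```

Even this minimal rule produces a round-robin-like fixation schedule in which high-reward sub-tasks are revisited before their uncertainty grows large, which is the qualitative pattern the proposal seeks to test quantitatively.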
We will model the development of memory representations as a reduction in uncertainty and evaluate how prior knowledge changes attentional allocation in uncertain environments. The work represents a major advance in developing a theoretical context for understanding the selection of gaze targets by a moving observer. To date, formal theoretical approaches to decision making have addressed only highly simplified scenarios. Because we are investigating natural vision, the work has direct implications for both clinical and human-factors situations involving multi-tasking. Eye movements are diagnostic of a variety of neural disorders, and the characterization of normal gaze patterns in natural tasks provides essential data for comparison with disease states.
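One minimal way to formalize "memory as reduced uncertainty" is a scalar Kalman-filter variance update, sketched below. This is our own toy formalization, not the proposal's model: the process noise q (standing in for memory decay and self-motion) and measurement noise r (a fixation) are assumed, illustrative values.

```python
def predict(var, q=0.05):
    """Uncertainty about a remembered feature grows while it is unobserved
    (memory decay plus the observer's own motion)."""
    return var + q

def observe(var, r=0.1):
    """A fixation acts as a noisy measurement: the Kalman variance update
    shrinks uncertainty toward a floor set by the noise levels."""
    gain = var / (var + r)
    return (1.0 - gain) * var

var = 1.0  # novel environment: large prior uncertainty
for _ in range(10):  # alternate walking (prediction) and glancing (observation)
    var = observe(predict(var))
print(round(var, 3))  # settles near the steady-state value 0.05
```

Repeated visits drive the variance toward a small steady state, one way of expressing how familiarity with an environment permits less frequent fixation of stable regions and frees attention for novel or unstable ones.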

Public Health Relevance

A central feature of natural, visually guided behavior is that visual information is actively sampled from the environment by a sequence of gaze changes. The goal of this proposal is to develop an empirical and theoretical understanding of the sensory-motor decisions that control this sampling process as observers move through natural environments. The proposed experiments help define what tasks subjects need to perform when walking and what information might be needed. This is a necessary first step that lays the groundwork for investigating clinical populations, since patient data must be interpreted in the context of normal performance. The experiments also have direct relevance to safety issues in driving and in any situation involving multi-tasking. The development of virtual environments for natural tasks is important because it allows us to safely investigate situations that would be dangerous to test in the real world.

National Institutes of Health (NIH)
National Eye Institute (NEI)
Research Project (R01)
Study Section: Special Emphasis Panel (SPC)
Program Officer: Wiggs, Cheri
University of Texas Austin
Schools of Arts and Sciences
United States
Hayhoe, Mary; Ballard, Dana (2014) Modeling task control of eye movements. Curr Biol 24:R622-8
Delerue, Celine; Hayhoe, Mary; Boucart, Muriel (2013) Eye movements during natural actions in patients with schizophrenia. J Psychiatry Neurosci 38:317-24
Iorizzo, Dana B; Riley, Meghan E; Hayhoe, Mary et al. (2011) Differential impact of partial cortical blindness on gaze strategies when sitting and walking - an immersive virtual reality study. Vision Res 51:1173-84
Tatler, Benjamin W; Hayhoe, Mary M; Land, Michael F et al. (2011) Eye guidance in natural vision: reinterpreting salience. J Vis 11:5
Hamid, Sahar N; Stankiewicz, Brian; Hayhoe, Mary (2010) Gaze patterns in navigation: encoding information in large-scale environments. J Vis 10:28
Hayhoe, Mary; Gillam, Barbara; Chajka, Kelly et al. (2009) The role of binocular vision in walking. Vis Neurosci 26:73-80
Ballard, Dana H; Hayhoe, Mary M (2009) Modelling the role of task in the control of gaze. Vis cogn 17:1185-1204
Huxlin, Krystel R; Martin, Tim; Kelly, Kristin et al. (2009) Perceptual relearning of complex visual motion after V1 damage in humans. J Neurosci 29:3981-91
Sullivan, Brian; Jovancevic-Misic, Jelena; Hayhoe, Mary et al. (2008) Use of multiple preferred retinal loci in Stargardt's disease during natural tasks: a case study. Ophthalmic Physiol Opt 28:168-77
Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M et al. (2006) Learning receptive fields using predictive feedback. J Physiol Paris 100:125-32