The proposed research aims to provide a unified theory of the task control of gaze and walking trajectories as humans move through natural environments. Until recently this goal would have been intractable, but recent research has illuminated the connection between simple sensory-motor decisions and behavioral goals. In particular, reinforcement-learning algorithms use reward signals to predict optimal behavior, and the central role of reward is well established in neurophysiological studies. Nonetheless, it is unclear how these mechanisms determine natural visually guided behavior. Since natural gaze behavior is tightly linked to behavioral goals, reinforcement learning has the potential to explain how behaviorally relevant targets are selected. We will develop a theoretical framework based on reinforcement learning for understanding sensory-motor decisions when humans move through natural environments. We will first use inverse reinforcement learning to estimate the internal reward associated with different behavioral goals as subjects navigate through obstacles and targets in a virtual environment, and then use the estimated reward values to predict the specific fixation sequences made while performing the task. We will test whether reward-weighted uncertainty determines gaze changes, predict gaze allocation in novel environments, and test how reward and uncertainty combine. A critical feature of our approach is the decomposition of complex behavior into a set of sub-tasks. This decomposition has the potential to make complex behavior theoretically tractable, and we will test this assumption. We will also identify and quantify potential sources of uncertainty, such as sensory encoding, decay in spatial working memory, and the observer's own motion through the environment. Prior knowledge of an environment allows more efficient allocation of attention to novel or unstable regions.
We will attempt to model the development of memory representations as a reduction in uncertainty, and evaluate how prior knowledge changes attentional allocation in uncertain environments. The work represents a major advance by developing a theoretical context for understanding the selection of gaze targets by a moving observer. To date, formal theoretical approaches to decision making have addressed highly simplified scenarios. Because we are investigating natural vision, the results have direct implications for both clinical and human-factors situations involving multi-tasking. Eye movements are diagnostic of a variety of neural disorders, and the characterization of normal gaze patterns in natural tasks provides essential data for comparison with disease states.
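The reward-weighted-uncertainty hypothesis described above can be illustrated with a minimal simulation. This is an illustrative sketch, not the authors' model: it assumes each sub-task (e.g., avoiding obstacles, following the path) has an internal reward, that state uncertainty grows while a sub-task is unattended and is resolved by a fixation, and that gaze is allocated to the sub-task with the largest product of reward and uncertainty. All class names, parameter values, and sub-task labels are hypothetical.

```python
# Illustrative sketch of gaze scheduling by reward-weighted uncertainty.
# Not the authors' implementation; all names and parameters are hypothetical.

class SubTask:
    def __init__(self, name, reward, noise_rate):
        self.name = name
        self.reward = reward          # internal reward for this sub-task
        self.noise_rate = noise_rate  # variance growth per unattended step
        self.variance = 0.0           # current state uncertainty

def simulate(subtasks, steps):
    """Return the sequence of fixated sub-task names over `steps` time steps."""
    fixations = []
    for _ in range(steps):
        # uncertainty accumulates for every sub-task on each time step
        for t in subtasks:
            t.variance += t.noise_rate
        # fixate the sub-task with the largest reward-weighted uncertainty
        target = max(subtasks, key=lambda t: t.reward * t.variance)
        target.variance = 0.0         # a fixation resolves its uncertainty
        fixations.append(target.name)
    return fixations

tasks = [SubTask("obstacle", reward=3.0, noise_rate=0.5),
         SubTask("path",     reward=1.0, noise_rate=0.5),
         SubTask("target",   reward=1.0, noise_rate=1.0)]
seq = simulate(tasks, 20)
```

Under these assumed parameters, the high-reward "obstacle" sub-task is fixated most often even though its uncertainty grows no faster than the others, which is the qualitative signature of reward-weighted gaze allocation that the proposal sets out to test.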

Public Health Relevance

A central feature of natural, visually guided behavior is that visual information is actively sampled from the environment by a sequence of gaze changes. The goal of this proposal is to develop an empirical and theoretical understanding of the sensory-motor decisions that control this sampling process as observers move through natural environments. The present experiments help define what tasks subjects need to perform when walking, and what information those tasks require. This is a necessary first step that lays the groundwork for investigating clinical populations, since patient data must be interpreted in the context of normal performance. The experiments also have direct relevance to safety issues in driving and in any situation involving multi-tasking. The development of virtual environments for natural tasks is important because it allows us to safely investigate situations that would be dangerous to test in the real world.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY005729-30
Application #
9040943
Study Section
Mechanisms of Sensory, Perceptual, and Cognitive Processes Study Section (SPC)
Program Officer
Wiggs, Cheri
Project Start
1984-07-01
Project End
2018-03-31
Budget Start
2016-04-01
Budget End
2017-03-31
Support Year
30
Fiscal Year
2016
Total Cost
Indirect Cost
Name
University of Texas Austin
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
170230239
City
Austin
State
TX
Country
United States
Zip Code
78712
Matthis, Jonathan Samir; Yates, Jacob L; Hayhoe, Mary M (2018) Gaze and the Control of Foot Placement When Walking in Natural Terrain. Curr Biol 28:1224-1233.e5
Li, Chia-Ling; Aivar, M Pilar; Tong, Matthew H et al. (2018) Memory shapes visual search strategies in large-scale environments. Sci Rep 8:4324
McCann, Brian C; Hayhoe, Mary M; Geisler, Wilson S (2018) Contributions of monocular and binocular cues to distance discrimination in natural scenes. J Vis 18:12
Hayhoe, Mary M (2018) Davida Teller Award Lecture 2017: What can be learned from natural behavior? J Vis 18:10
Tong, Matthew H; Zohar, Oran; Hayhoe, Mary M (2017) Control of gaze while walking: Task structure, reward, and uncertainty. J Vis 17:28
Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M et al. (2016) Memory and visual search in naturalistic 2D and 3D environments. J Vis 16:9
Boucart, Muriel; Delerue, Celine; Thibaut, Miguel et al. (2015) Impact of Wet Macular Degeneration on the Execution of Natural Actions. Invest Ophthalmol Vis Sci 56:6832-8
Gottlieb, Jacqueline; Hayhoe, Mary; Hikosaka, Okihide et al. (2014) Attention, reward, and information seeking. J Neurosci 34:15497-504
Kit, Dmitry; Katz, Leor; Sullivan, Brian et al. (2014) Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One 9:e94362
Hayhoe, Mary; Ballard, Dana (2014) Modeling task control of eye movements. Curr Biol 24:R622-8

Showing the most recent 10 out of 23 publications