The goal of this proposal is to understand human vision in the context of natural behavior. This is of fundamental importance because relatively little is known about how vision functions in the natural world, and many important issues arise in this setting that are absent, or difficult to address, in standard paradigms. It is our contention that understanding many aspects of vision, such as selective attention and the control of gaze, will be impossible without investigating vision in its natural context. Previous attempts to explain gaze patterns have almost exclusively addressed static, restricted stimulus conditions and focused on the properties of the stimulus. Such models cannot extend to natural behavior, where the visual input is dynamic and the observer's behavioral goals play a dominant role. The pervasive effect of reward in the neural circuitry underlying saccadic eye movements, the development of the mathematics of Reinforcement Learning, and the application of statistical decision theory to sensory-motor behavior together allow a novel framework for understanding the sequential acquisition of visual information in the context of normal behavior. We explore evidence for this framework in the proposal. Specifically, we examine the roles of reward, uncertainty, and prior knowledge in the control of gaze in the natural world, with the goal of providing a formal structure for understanding complex behavioral sequences. The research focuses on dynamic environments, where a central open question is how the visual system balances the need to attend to existing goals against the need to maintain sensitivity to new information that may signal opportunities or threats. We investigate the roles of reward, uncertainty, and prior knowledge in the control of gaze in experiments in an immersive virtual walking environment. We then test the predictions of a model based on reinforcement learning in a simple divided-attention task.
We then use recently developed methods of Inverse Reinforcement Learning to estimate the intrinsic rewards associated with different behavioral goals in the walking environment. This will allow us, for the first time, to infer intrinsic human rewards directly and to predict gaze sequences in novel situations. We also explore the nature and complexity of prior knowledge and the role of prediction in intercepting moving objects in a virtual environment. Although investigating natural behavior is challenging, it provides the empirical basis and theoretical tools for understanding how complex behavioral sequences are generated. The experiments will lay the groundwork for investigating clinical populations in contexts such as walking, and will also have direct relevance to safety issues in driving and in any situation involving multi-tasking.
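The reinforcement-learning framework described above treats each fixation as a choice about which information source to sample, with gaze allocation shaped by learned reward. As a minimal illustrative sketch only (not the grant's actual model), a two-target bandit shows how incremental reward learning leads an agent to fixate the more rewarding target more often; the parameters and reward probabilities below are hypothetical:

```python
import random

def run_bandit(reward_probs, alpha=0.1, epsilon=0.1, trials=5000, seed=0):
    """Illustrative stand-in for a divided-attention task: the agent
    repeatedly chooses which of two 'gaze targets' to fixate and
    learns each target's value from stochastic reward."""
    rng = random.Random(seed)
    q = [0.0, 0.0]      # learned value of fixating each target
    counts = [0, 0]     # how often each target was fixated
    for _ in range(trials):
        # epsilon-greedy choice: mostly exploit the higher-valued
        # target, occasionally explore the other one
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = 0 if q[0] >= q[1] else 1
        # stochastic reward with the target's (hypothetical) probability
        r = 1.0 if rng.random() < reward_probs[a] else 0.0
        # incremental value update (delta rule)
        q[a] += alpha * (r - q[a])
        counts[a] += 1
    return q, counts

if __name__ == "__main__":
    q, counts = run_bandit([0.8, 0.2])
    print("learned values:", q)
    print("fixation counts:", counts)
```

Under this toy scheme, the learned values converge toward the true reward probabilities and gaze is allocated preferentially to the richer target, which is the qualitative pattern a reward-based account of gaze allocation predicts.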

Public Health Relevance

This grant explores the use of vision and the control of eye movements in the context of natural, visually guided behavior. The present experiments help define what tasks subjects need to perform when walking, and what information might be needed. This is a necessary first step that will lay the groundwork for investigating clinical populations, since patient data need to be interpreted in the context of normal performance. The experiments also have direct relevance to safety issues in driving and in any situation involving multi-tasking. The development of virtual environments for natural tasks is important because it allows us to safely investigate situations that would be dangerous to test in the real world.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY005729-26
Application #
8056834
Study Section
Cognitive Neuroscience Study Section (COG)
Program Officer
Wiggs, Cheri
Project Start
1984-07-01
Project End
2013-04-30
Budget Start
2011-05-01
Budget End
2012-04-30
Support Year
26
Fiscal Year
2011
Total Cost
$357,012
Indirect Cost
Name
University of Texas Austin
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
170230239
City
Austin
State
TX
Country
United States
Zip Code
78712
Li, Chia-Ling; Aivar, M Pilar; Tong, Matthew H et al. (2018) Memory shapes visual search strategies in large-scale environments. Sci Rep 8:4324
McCann, Brian C; Hayhoe, Mary M; Geisler, Wilson S (2018) Contributions of monocular and binocular cues to distance discrimination in natural scenes. J Vis 18:12
Hayhoe, Mary M (2018) Davida Teller Award Lecture 2017: What can be learned from natural behavior? J Vis 18:10
Matthis, Jonathan Samir; Yates, Jacob L; Hayhoe, Mary M (2018) Gaze and the Control of Foot Placement When Walking in Natural Terrain. Curr Biol 28:1224-1233.e5
Tong, Matthew H; Zohar, Oran; Hayhoe, Mary M (2017) Control of gaze while walking: Task structure, reward, and uncertainty. J Vis 17:28
Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M et al. (2016) Memory and visual search in naturalistic 2D and 3D environments. J Vis 16:9
Boucart, Muriel; Delerue, Celine; Thibaut, Miguel et al. (2015) Impact of Wet Macular Degeneration on the Execution of Natural Actions. Invest Ophthalmol Vis Sci 56:6832-8
Gottlieb, Jacqueline; Hayhoe, Mary; Hikosaka, Okihide et al. (2014) Attention, reward, and information seeking. J Neurosci 34:15497-504
Kit, Dmitry; Katz, Leor; Sullivan, Brian et al. (2014) Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One 9:e94362
Hayhoe, Mary; Ballard, Dana (2014) Modeling task control of eye movements. Curr Biol 24:R622-8

Showing the most recent 10 out of 23 publications