In the context of natural behavior, humans make continuous sequences of sensory-motor decisions to satisfy current behavioral goals, and vision must provide the information needed to achieve those goals. The proposed work examines gaze and walking decisions during locomotion in outdoor environments, taking advantage of our novel system for measuring combined eye and body movements in these contexts. Currently we have only limited understanding of the constituent tasks in natural locomotion, or the requisite information, and the proposal attempts to specify these. In the context of natural gait, the patterns of optic flow are unexpectedly complex, raising questions about its role. The patterns of motion on the retina during locomotion depend critically on both eye and body motion, and these in turn depend on behavioral goals.
Our first Aim is therefore to comprehensively describe the statistics of retinal motion patterns in a variety of terrains and task contexts. We will measure binocular eye and body movements while walking in outdoor terrains of varying roughness, crossing a busy intersection, and making coffee. These contexts will induce different gaze patterns. We will provide a comprehensive description of the motion stimulus in natural locomotion and help separate self-motion signals from externally generated motion. These data will allow a more precise specification of the response patterns in cortical motion-sensitive areas. Because of the complexity of natural motion patterns, we will re-examine the influence of optic flow on walking direction in a virtual reality environment and test alternative explanations for the role of flow. A central task in walking is foot placement, and we will focus on identifying the image properties that make a good foothold. Stereo, structure from motion, and spatial image structure are all likely contenders. We will directly investigate the role of stereo in foothold selection by examining gait patterns in stereo-deficient subjects in terrains with varying degrees of roughness. Using a different strategy, we will attempt to predict gaze locations and footholds in rough terrain using convolutional neural networks (CNNs) to identify potential search templates for footholds. We will describe fixation patterns from crosswalk and sidewalk navigation and attempt to make inferences about their purpose, and use Modular Inverse Reinforcement Learning (MIRL) to predict direction decisions and decompose the behavior into sub-tasks. The collection of integrated gaze, body kinematics, and scene images in a range of natural environments is innovative, as little comparable data exists. The work will be strengthened by the investigation of stereo-deficient subjects, for whom there is almost no integrated eye and body data.
Since much of the work in robotics has no visual input at all, this work should help in the development of visual guidance for robots and also help better define the necessary information for individuals with impaired vision. The data set will be made publicly available.

Public Health Relevance

The central goal of this work is to understand vision in its natural context. This information is essential for devising suitable vision aids and rehabilitation strategies for individuals with visual impairments, and it is becoming increasingly accessible because of developments in technology for monitoring eye and body movements. The proposed work examines gaze and walking decisions during locomotion in outdoor environments, taking advantage of our novel system for measuring combined eye and body movements in these contexts. Currently we have only limited understanding of the constituent tasks and requisite information in natural locomotion, and the proposal attempts to specify these. The collection of integrated gaze, body kinematics, and scene images in a range of natural environments is innovative, as little comparable data exists. The work will be strengthened by the investigation of stereo-deficient subjects, for whom there is almost no integrated eye and body data. Since much of the work in robotics has no visual input at all, this work should help in the development of visual guidance for robots and also help better define the necessary information for individuals with impaired vision. The data set will be made publicly available.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
2R01EY005729-32
Application #
9830977
Study Section
Mechanisms of Sensory, Perceptual, and Cognitive Processes Study Section (SPC)
Program Officer
Wiggs, Cheri
Project Start
1984-07-01
Project End
2024-08-31
Budget Start
2019-09-01
Budget End
2020-08-31
Support Year
32
Fiscal Year
2019
Total Cost
Indirect Cost
Name
University of Texas Austin
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
170230239
City
Austin
State
TX
Country
United States
Zip Code
78759
Hayhoe, Mary M (2018) Davida Teller Award Lecture 2017: What can be learned from natural behavior? J Vis 18:10
Matthis, Jonathan Samir; Yates, Jacob L; Hayhoe, Mary M (2018) Gaze and the Control of Foot Placement When Walking in Natural Terrain. Curr Biol 28:1224-1233.e5
Li, Chia-Ling; Aivar, M Pilar; Tong, Matthew H et al. (2018) Memory shapes visual search strategies in large-scale environments. Sci Rep 8:4324
McCann, Brian C; Hayhoe, Mary M; Geisler, Wilson S (2018) Contributions of monocular and binocular cues to distance discrimination in natural scenes. J Vis 18:12
Tong, Matthew H; Zohar, Oran; Hayhoe, Mary M (2017) Control of gaze while walking: Task structure, reward, and uncertainty. J Vis 17:28
Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M et al. (2016) Memory and visual search in naturalistic 2D and 3D environments. J Vis 16:9
Boucart, Muriel; Delerue, Celine; Thibaut, Miguel et al. (2015) Impact of Wet Macular Degeneration on the Execution of Natural Actions. Invest Ophthalmol Vis Sci 56:6832-8
Gottlieb, Jacqueline; Hayhoe, Mary; Hikosaka, Okihide et al. (2014) Attention, reward, and information seeking. J Neurosci 34:15497-504
Kit, Dmitry; Katz, Leor; Sullivan, Brian et al. (2014) Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One 9:e94362
Hayhoe, Mary; Ballard, Dana (2014) Modeling task control of eye movements. Curr Biol 24:R622-8

Showing the most recent 10 out of 23 publications