The proposed research will attempt to provide a unified theory of task control of gaze and walking trajectories as humans move through natural environments. Until recently this goal would have been intractable, but a number of recent research results have illuminated the connection between simple sensory-motor decisions and behavioral goals. In particular, reinforcement-learning algorithms use reward signals to predict optimal behavior, and the central role of reward is well established in neurophysiological studies. Nonetheless, it is unclear how these mechanisms determine natural visually guided behavior. Since natural gaze behavior is tightly linked to behavioral goals, reinforcement learning has the potential to explain how behaviorally relevant targets are selected. We will develop a theoretical framework based on reinforcement learning for understanding sensory-motor decisions when humans move through natural environments. We will first use Inverse Reinforcement Learning methods to estimate the internal reward associated with different behavioral goals as subjects navigate through obstacles and targets in a virtual environment, and then use the estimated reward values to predict the specific fixation sequences made while performing the task. We will test whether reward-weighted uncertainty determines gaze changes, predict gaze allocation in novel environments, and test how reward and uncertainty combine. A critical feature of the approach taken here is the decomposition of complex behavior into a set of sub-tasks. This decomposition has the potential to make complex behavior theoretically tractable, and we will test this assumption. We will attempt to identify and quantify the potential sources of uncertainty, such as sensory encoding, decay in spatial working memory, and uncertainty stemming from the observer's own motion in the environment. Prior knowledge of an environment allows more efficient allocation of attention to novel or unstable regions.
We will attempt to model the development of memory representations as a reduction in uncertainty, and evaluate how prior knowledge changes attentional allocation in uncertain environments. The work represents a major advance by developing a theoretical context for understanding selection of gaze targets by a moving observer. To date, formal theoretical approaches to decision making have addressed highly simplified scenarios. Because we are investigating natural vision, there are direct implications for both clinical and human factors situations involving multi-tasking. Eye movements are diagnostic of a variety of neural disorders, and the characterization of normal gaze patterns in natural tasks provides essential data for comparison with disease states.
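The reward-weighted uncertainty scheme described above can be illustrated with a minimal sketch. This is a toy illustration, not the proposed model: the sub-task names, reward weights, and linear uncertainty-growth rule are assumptions chosen for clarity. Each sub-task module carries a reward weight and a running uncertainty about its task-relevant state; uncertainty grows for unattended modules (e.g., from memory decay or self-motion) and collapses when a module is fixated, and gaze is allocated to the module with the largest reward-weighted uncertainty.

```python
def select_gaze(modules):
    """Pick the sub-task whose reward-weighted uncertainty is largest."""
    return max(modules, key=lambda m: m["reward"] * m["uncertainty"])

def step(modules, growth=1.0, reset=0.0):
    """One fixation cycle: uncertainty grows for unfixated sub-tasks
    (memory decay, self-motion noise) and collapses for the fixated one."""
    target = select_gaze(modules)
    for m in modules:
        if m is target:
            m["uncertainty"] = reset      # fixation refreshes the state estimate
        else:
            m["uncertainty"] += growth    # unobserved state drifts
    return target["name"]

# Illustrative sub-tasks for walking through a cluttered environment;
# reward values are arbitrary (obstacle avoidance weighted highest).
modules = [
    {"name": "obstacle", "reward": 3.0, "uncertainty": 1.0},
    {"name": "target",   "reward": 1.0, "uncertainty": 1.0},
    {"name": "path",     "reward": 1.0, "uncertainty": 1.0},
]
fixations = [step(modules) for _ in range(6)]
```

Even in this toy setting the high-reward sub-task captures the most fixations, while lower-reward sub-tasks are still revisited once their accumulated uncertainty is large enough; this is the qualitative pattern the proposed experiments are designed to test quantitatively.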
A central feature of natural, visually guided behavior is that visual information is actively sampled from the environment by a sequence of gaze changes. The goal of this proposal is to develop an empirical and theoretical understanding of the sensory-motor decisions that control this sampling process as observers move through natural environments. The present experiments help define what tasks subjects need to perform when walking, and what information might be needed. This is a necessary first step that will lay the groundwork for investigation of clinical populations, since patient data need to be interpreted in the context of normal performance. The experiments also have direct relevance to safety issues in driving and to any situation involving multi-tasking. The development of virtual environments for natural tasks is important, as it allows us to safely investigate situations that might be dangerous to test in the real world.
Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M et al. (2016) Memory and visual search in naturalistic 2D and 3D environments. J Vis 16:9
Boucart, Muriel; Delerue, Celine; Thibaut, Miguel et al. (2015) Impact of Wet Macular Degeneration on the Execution of Natural Actions. Invest Ophthalmol Vis Sci 56:6832-8
Gottlieb, Jacqueline; Hayhoe, Mary; Hikosaka, Okihide et al. (2014) Attention, reward, and information seeking. J Neurosci 34:15497-504
Hayhoe, Mary; Ballard, Dana (2014) Modeling task control of eye movements. Curr Biol 24:R622-8
Johnson, Leif; Sullivan, Brian; Hayhoe, Mary et al. (2014) Predicting human visuomotor behaviour in a driving task. Philos Trans R Soc Lond B Biol Sci 369:20130044
Kit, Dmitry; Katz, Leor; Sullivan, Brian et al. (2014) Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One 9:e94362
Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry et al. (2013) Real-time recording and classification of eye movements in an immersive virtual environment. J Vis 13:
Delerue, Celine; Hayhoe, Mary; Boucart, Muriel (2013) Eye movements during natural actions in patients with schizophrenia. J Psychiatry Neurosci 38:317-24
Iorizzo, Dana B; Riley, Meghan E; Hayhoe, Mary et al. (2011) Differential impact of partial cortical blindness on gaze strategies when sitting and walking - an immersive virtual reality study. Vision Res 51:1173-84
Tatler, Benjamin W; Hayhoe, Mary M; Land, Michael F et al. (2011) Eye guidance in natural vision: reinterpreting salience. J Vis 11:5
Showing the most recent 10 out of 16 publications