Extensive research has provided a comprehensive understanding of the neural mechanisms of gaze deployment, but there is still a fundamental lack of understanding of the cognitive mechanisms that choose one possible fixation over another. Attempts to characterize these choices in terms of image properties have been a useful beginning, but at this point we can only go further by introducing cognitive factors. The proposed research uses driving in virtual reality as a controlled environment within which we can ask when and where fixations are made and what influences these choices. Driving is a complex but circumscribed skill with which most subjects have extensive experience. Our studies and those of others have demonstrated that other complex behaviors, such as tea making and sandwich making, can readily be seen as compositions of more elemental behaviors. This modularized theory of behavior allows us to propose very specific, testable hypotheses about the deployment of gaze that are aimed at elucidating its essential link with cognition.

Unique capabilities of our facility allow us to measure eye, head and hand movements, as well as acceleration, braking and steering movements, within the confines of a very realistic driving simulator that uses a state-of-the-art Sensics wide-field-of-view head-mounted binocular display (HMD). The simulator is mounted on a hydraulic platform that delivers realistic acceleration stimuli, and the driver is immersed in a very complex cityscape driving venue rendered in real time on the HMD.

Our theory is that the rules for the deployment of gaze are learned by reinforcement and are based on reward-based optimality criteria. This theory will be tested using human driving experiments as well as a human avatar driver that makes realistic gaze movements with fixations. The avatar performs complicated tasks by decomposing them into essential modules, each of which achieves its goal by repeatedly recognizing crucial visual features in the scene and carrying out the relevant action (a simplified sketch of this reward-based allocation scheme follows this summary). Our preliminary studies have successfully modeled human data from walking and sandwich making and have suggested several hypotheses about the conduct of human visually guided behavior that we propose to develop and test in the more demanding virtual automobile driving environment.

The proposed research has three interrelated foci directed at three central questions in task-directed visual processing.
1. When is gaze deployed? Our theory suggests that gaze is deployed in the aid of the behavior that needs it the most.
2. Is the disposition of gaze reward-based? Since gaze cannot easily be shared among concurrent behaviors, there must be some way of allocating it. This project will test a new analytical formulation that describes gaze competition in a multi-task situation.
3. How is visual alerting handled? How do humans recognize important interruptions from the visual environment? We will test the hypothesis that a behavior for recognizing a new situation competes with the current behaviors by promising greater rewards.
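To make the reward-based allocation idea concrete, the following is a minimal sketch, not the project's actual model, of how gaze might be scheduled among concurrent task modules. The module names, reward values and noise rates are illustrative assumptions; the scheme simply gives gaze to the module whose state uncertainty, weighted by the reward at stake, would cost the most if left unattended, in the spirit of the modular reinforcement-learning account described above.

```python
# Illustrative sketch of reward-weighted gaze scheduling among concurrent
# driving-task modules (e.g., lane following, car following, speed control).
# All module names and numeric values below are hypothetical.

class TaskModule:
    def __init__(self, name, reward, noise):
        self.name = name          # behavior served by this module
        self.reward = reward      # reward per time step for performing the task well
        self.noise = noise        # rate at which state uncertainty grows per step
        self.uncertainty = 0.0    # current uncertainty about the task-relevant state

    def expected_loss(self):
        # Expected reward lost if this module does NOT receive gaze now:
        # greater uncertainty about the state means a greater chance of an error.
        return self.reward * self.uncertainty

    def step(self, has_gaze):
        if has_gaze:
            self.uncertainty = 0.0            # a fixation refreshes the state estimate
        else:
            self.uncertainty += self.noise    # uncertainty grows without new visual input


def allocate_gaze(modules):
    # Give gaze to the module that stands to lose the most reward without it.
    return max(modules, key=lambda m: m.expected_loss())


if __name__ == "__main__":
    modules = [
        TaskModule("lane_following", reward=1.0, noise=0.30),
        TaskModule("car_following",  reward=0.8, noise=0.20),
        TaskModule("speed_control",  reward=0.5, noise=0.10),
    ]
    for t in range(10):
        target = allocate_gaze(modules)
        for m in modules:
            m.step(has_gaze=(m is target))
        print(f"t={t}: gaze -> {target.name}")
```

A formulation along these lines makes the second focus testable: it predicts what proportion of fixations each concurrent task should receive as a function of its reward and how quickly its task-relevant state changes, predictions that can be compared directly with drivers' measured gaze distributions.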

Public Health Relevance

When we perform common everyday tasks, such as driving, making coffee or making a sandwich, we depend heavily on our eyes. They direct our actions by looking at the items we use in the task and also help coordinate our arm and other body movements. We have a good idea of how nerve cells make the eyes move from place to place, but we do not understand how our brain chooses one particular place over another. It was initially thought that image objects, such as a stop sign or a fire hydrant, are the main things that command our gaze, but we think the choice is more likely to depend on what people are thinking about from moment to moment. Our proposed research uses driving in virtual reality as a controlled environment in which we can see where eye fixations are made and what influences these choices. Driving is a common skill with which most subjects have extensive experience and during which they make similar eye fixations, so it is a good venue for our studies. Unique capabilities of our research facility allow us to measure eye, head and hand movements, as well as acceleration, braking and steering movements, within the confines of a very realistic driving simulator that uses a head-mounted binocular display (HMD). The simulator is mounted on a hydraulic platform that provides a sense of acceleration, and the driver is immersed in a very complex cityscape that looks very much like reality. The combination of new instrumentation and analytical techniques proposed here should produce a detailed model of cognition that will help us understand disease-related cognitive problems and spur the use of eye gaze in clinical diagnostic tools. Diseases such as schizophrenia, Huntington's, Tourette's, Alzheimer's and ADHD all produce characteristically abnormal eye fixations that can aid diagnosis. The even larger hope is that, by knowing how the eyes are used in these instances, we can gain a general idea of how the brain functions.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
2R01EY019174-14A1
Application #
7656984
Study Section
Cognition and Perception Study Section (CP)
Program Officer
Steinmetz, Michael A
Project Start
2009-05-01
Project End
2012-04-30
Budget Start
2009-05-01
Budget End
2010-04-30
Support Year
14
Fiscal Year
2009
Total Cost
$331,600
Indirect Cost
Name
University of Texas Austin
Department
Biostatistics & Other Math Sci
Type
Schools of Arts and Sciences
DUNS #
170230239
City
Austin
State
TX
Country
United States
Zip Code
78712
Kit, Dmitry; Katz, Leor; Sullivan, Brian et al. (2014) Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One 9:e94362
Hayhoe, Mary; Ballard, Dana (2014) Modeling task control of eye movements. Curr Biol 24:R622-8
Diaz, Gabriel; Cooper, Joseph; Hayhoe, Mary (2013) Memory and prediction in natural gaze control. Philos Trans R Soc Lond B Biol Sci 368:20130064
Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin et al. (2013) Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task. J Vis 13:
Sullivan, Brian T; Johnson, Leif; Rothkopf, Constantin A et al. (2012) The role of uncertainty and reward on eye movements in a virtual driving task. J Vis 12:19
Ballard, Dana H; Jehee, Janneke F M (2011) Dual roles for spike signaling in cortical neural populations. Front Comput Neurosci 5:22
Tatler, Benjamin W; Hayhoe, Mary M; Land, Michael F et al. (2011) Eye guidance in natural vision: reinterpreting salience. J Vis 11:5
Rothkopf, Constantin A; Ballard, Dana H (2010) Credit assignment in multiple goal embodied visuomotor behavior. Front Psychol 1:173
Yi, Weilie; Ballard, Dana (2009) Recognizing behavior in hand-eye coordination patterns. Int J HR 6:337-359