The overall goal of the grant is to understand how vision functions in the context of ordinary behavior. To do this we must understand the transition between the low-level perceptual machinery and ongoing sensory-motor behavior, a transition that spans time scales from a few hundred milliseconds to several seconds. In the context of ongoing behavior, vision is highly selective. The information extracted from the image during a fixation is fragmentary and driven by the immediate task demands: in different fixations on a given location, different information may be acquired, although the retinal image is identical in each case. This selectivity leads to a very different understanding of vision, not as performing a general-purpose set of transformations, but as a dynamic process that extracts specific, limited information from the image for the immediate task. Our research goal is to characterize the nature and extent of this selectivity. In addition, the experiments lay the foundation for a rigorous understanding of how elementary visual and motor operations are composed into more complex ongoing behavior. Thus the grant develops new paradigms to validate this functional approach to vision. The fragmentary nature of visual representations at this time scale presents a problem for coordinating larger behaviors: representations must be sufficiently extensive to preserve the continuity of visual experience and to mediate coordinated movements. Thus the experiments focus on what information is extracted and retained across fixations, whether it is used to guide subsequent eye movements, and how different kinds of information are composed into larger behavioral units.
Specific aims are: (1) Does an implicit memory representation of scene structure provide the representational substrate for "marking" salient locations in a scene and for selecting the target of a saccade? (2) Do observers use the same representational mechanisms to access information in a more spatially realistic visual environment where the scale of the movements is different? Does coding by remembered location involve loss of information about object features? (3) Can a complex task like driving be broken down into the sequential application of sub-component tasks with specific control variables, and how are these sub-components integrated? The goal is to understand the balance between task-directed, top-down visual processing and bottom-up, stimulus-driven visual processes. Independent of issues about visual representations, we need to examine performance in the context of ordinary tasks because we know very little about how vision is used in ordinary circumstances. This information shapes the questions we ask and is also directly applicable to a number of practical and clinical issues, such as the impact of particular neural deficits on everyday visually guided behavior.

Agency: National Institutes of Health (NIH)
Institute: National Eye Institute (NEI)
Type: Research Project (R01)
Project #: 5R01EY005729-18
Application #: 6384490
Study Section: Visual Sciences B Study Section (VISB)
Program Officer: Oberdorfer, Michael
Project Start: 1984-07-01
Project End: 2003-05-31
Budget Start: 2001-05-31
Budget End: 2002-05-31
Support Year: 18
Fiscal Year: 2001
Total Cost: $237,608
Indirect Cost:
Name: University of Rochester
Department: Miscellaneous
Type: Schools of Arts and Sciences
DUNS #: 208469486
City: Rochester
State: NY
Country: United States
Zip Code: 14627
Matthis, Jonathan Samir; Yates, Jacob L; Hayhoe, Mary M (2018) Gaze and the Control of Foot Placement When Walking in Natural Terrain. Curr Biol 28:1224-1233.e5
Li, Chia-Ling; Aivar, M Pilar; Tong, Matthew H et al. (2018) Memory shapes visual search strategies in large-scale environments. Sci Rep 8:4324
McCann, Brian C; Hayhoe, Mary M; Geisler, Wilson S (2018) Contributions of monocular and binocular cues to distance discrimination in natural scenes. J Vis 18:12
Hayhoe, Mary M (2018) Davida Teller Award Lecture 2017: What can be learned from natural behavior? J Vis 18:10
Tong, Matthew H; Zohar, Oran; Hayhoe, Mary M (2017) Control of gaze while walking: Task structure, reward, and uncertainty. J Vis 17:28
Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M et al. (2016) Memory and visual search in naturalistic 2D and 3D environments. J Vis 16:9
Boucart, Muriel; Delerue, Celine; Thibaut, Miguel et al. (2015) Impact of Wet Macular Degeneration on the Execution of Natural Actions. Invest Ophthalmol Vis Sci 56:6832-8
Gottlieb, Jacqueline; Hayhoe, Mary; Hikosaka, Okihide et al. (2014) Attention, reward, and information seeking. J Neurosci 34:15497-504
Kit, Dmitry; Katz, Leor; Sullivan, Brian et al. (2014) Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One 9:e94362
Hayhoe, Mary; Ballard, Dana (2014) Modeling task control of eye movements. Curr Biol 24:R622-8

Showing the most recent 10 out of 23 publications