The overarching goal of this research is to better understand the human visual system: how objects and their locations are perceived and represented in the brain. The proposal investigates a fundamental challenge for our visual system: visual information is coded relative to the eyes, but the eyes are constantly moving. How, then, do we achieve spatial stability? The world does not appear to jump with each eye movement, yet this seamless percept belies a complex computational process. Prior work by the PI has taken significant first steps toward understanding how spatial attention and memory are represented (or remapped) across eye movements. The current proposal advances beyond prior studies in two key ways: by taking into account (1) 3D depth information and (2) feature/object recognition. The questions of spatial remapping, object recognition, and depth perception are central to our understanding of human perception and brain function. These topics are typically investigated separately; the proposal takes the novel approach of integrating them into a single theory of visual stability.
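To make the computational problem concrete, the following minimal sketch (illustrative only, not part of the proposal; the function name and coordinate values are hypothetical) shows the retinotopic/spatiotopic relation that remapping must solve:

```python
import numpy as np

# Minimal sketch (not from the proposal) of the coordinate problem described
# above. Visual input is coded retinotopically (relative to gaze), so for a
# world-fixed point:
#     retinotopic = spatiotopic - gaze
# Every saccade changes `gaze`, so retinotopic coordinates jump even though
# the world is stable; "remapping" is the updating that compensates.

def to_retinotopic(spatiotopic_xy, gaze_xy):
    """Eye-centered coordinates of a world-fixed point (all in deg)."""
    return np.asarray(spatiotopic_xy, float) - np.asarray(gaze_xy, float)

object_world = (10.0, 4.0)                    # hypothetical stable object location
gaze_pre, gaze_post = (0.0, 0.0), (8.0, 0.0)  # an 8-deg rightward saccade

print(to_retinotopic(object_world, gaze_pre))   # [10.  4.]
print(to_retinotopic(object_world, gaze_post))  # [ 2.  4.]
# Same object, same world location, but its retinotopic coordinates shifted
# by the inverse of the saccade vector -- the change remapping must account for.
```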
In Aim 1 we investigate how remapping accounts for 3D spatial information: first by establishing a clearer picture of how 3D spatial information is represented in the brain (Aim 1.1), and then building on this knowledge to ask how depth information is remapped across eye movements (Aim 1.2).
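As a concrete illustration of the 3D information at stake in Aim 1, the sketch below (again illustrative only; the parameter values are assumptions, not from the proposal) works through the standard small-angle geometry relating binocular disparity to position-in-depth:

```python
import numpy as np

# Illustrative sketch of depth-from-disparity, one cue by which the visual
# system can recover position-in-depth. Assumed parameters: interocular
# distance I and fixation distance D.

I = 0.063   # interocular distance, meters (typical adult value)
D = 1.0     # fixation distance, meters

def disparity(delta_depth, I=I, D=D):
    """Relative binocular disparity (radians) of a point delta_depth meters
    beyond fixation: delta = I/D - I/(D + delta_depth)."""
    return I / D - I / (D + delta_depth)

def depth_from_disparity(delta, I=I, D=D):
    """Invert the exact relation to recover position-in-depth."""
    return I / (I / D - delta) - D

d = 0.10                              # object 10 cm beyond fixation
delta = disparity(d)                  # ~5.7e-3 rad; small-angle approx I*d/D**2
print(np.degrees(delta) * 60)         # ~19.7 arcmin of disparity
print(depth_from_disparity(delta))    # recovers ~0.10 m
```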
Aim 2 investigates how remapping interacts with feature and object recognition: first asking what type of location information is bound to object features/identity (Aim 2.1), and then testing whether (and when) feature content is remapped across eye movements (Aim 2.2). The interdisciplinary research plan combines behavioral, eye-tracking, and neuroimaging (fMRI and EEG) techniques to achieve a more thorough understanding of spatial stability across eye movements. The research proposed here will have an immediate impact on our understanding of typical visual functioning in healthy human populations. These advances could also have a longer-term impact on a variety of clinical applications, informing our knowledge and assessment of visual disorders resulting from eye disease, injury, brain damage, and development/aging.

Public Health Relevance

The research proposed here will improve our understanding of typical visual functioning in healthy human populations, which can open up broad-reaching clinical applications in the future. In particular, the detailed mapping of 3D space in the brain could become a significant tool for understanding various visual disorders, including strabismus, macular degeneration, and stereo-blindness, and for assessing rehabilitation following treatment. Additionally, with a better understanding of spatial stability across eye movements, we can investigate whether this fundamental process is affected by aging, autism, schizophrenia, and depression (all of which are accompanied by changes in visual processing).

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY025648-02
Application #
9114111
Study Section
Cognition and Perception Study Section (CP)
Program Officer
Wiggs, Cheri
Project Start
2015-08-01
Project End
2020-07-31
Budget Start
2016-08-01
Budget End
2017-07-31
Support Year
2
Fiscal Year
2016
Total Cost
$375,430
Indirect Cost
$125,430
Name
Ohio State University
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
832127323
City
Columbus
State
OH
Country
United States
Zip Code
43210
Shafer-Skelton, Anna; Golomb, Julie D (2018) Memory for retinotopic locations is more accurate than memory for spatiotopic locations, even for visually guided reaching. Psychon Bull Rev 25:1388-1398
Berman, Daniel; Golomb, Julie D; Walther, Dirk B (2017) Scene content is predominantly conveyed by high spatial frequencies in scene-selective visual cortex. PLoS One 12:e0189828
Bapat, Avni N; Shafer-Skelton, Anna; Kupitz, Colin N et al. (2017) Binding object features to locations: Does the "spatial congruency bias" update with object movement? Atten Percept Psychophys 79:1682-1694
Finlayson, Nonie J; Zhang, Xiaoli; Golomb, Julie D (2017) Differential patterns of 2D location versus depth decoding along the visual hierarchy. Neuroimage 147:507-516
Finlayson, Nonie J; Golomb, Julie D (2017) 2D location biases depth-from-disparity judgments but not vice versa. Vis Cogn 25:841-852
Shafer-Skelton, Anna; Kupitz, Colin N; Golomb, Julie D (2017) Object-location binding across a saccade: A retinotopic spatial congruency bias. Atten Percept Psychophys 79:765-781
Lescroart, Mark D; Kanwisher, Nancy; Golomb, Julie D (2016) No Evidence for Automatic Remapping of Stimulus Features or Location Found with fMRI. Front Syst Neurosci 10:53
Srinivasan, Ramprakash; Golomb, Julie D; Martinez, Aleix M (2016) A Neural Basis of Facial Action Recognition in Humans. J Neurosci 36:4434-4442
Finlayson, Nonie J; Golomb, Julie D (2016) Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth. Vision Res 127:49-56