Accurate perception of the three-dimensional (3D) structure of the environment is essential to daily function. 3D vision requires the brain to reconstruct the depth structure of the environment from the sequence of 2D retinal images arriving at the eyes. Most of our knowledge about the neural mechanisms of 3D vision is limited to the case of stationary observers viewing static surfaces; in contrast, objects typically move in depth and observers often need to judge 3D scene structure while they are also moving. To arrive at a deeper understanding of how the brain computes depth under dynamic viewing conditions, we need to elucidate the mechanisms by which visual neurons compute the motion of objects in depth, as well as the neural computations that underlie perception of depth from motion parallax cues that arise during self-motion. We propose a series of experiments that take important steps toward this more general understanding of the neural basis of depth perception.
Aim #1 examines the mechanisms by which neurons signal motion-in-depth via binocular cues. Recent work established that neurons in area MT signal motion-in-depth based on both interocular velocity differences and changing disparity cues, but the mechanisms of this selectivity remain unknown.
Aim #2 examines how global patterns of rotational optic flow resulting from observer movement are used by the brain to interpret depth from motion parallax. We hypothesize that these "dynamic perspective" cues are encoded by neurons in area MSTd with very large receptive fields, and that these neurons also carry integrated efference copy signals regarding eye rotation.
Aim #3 examines how extra-retinal signals related to eye and body rotation are combined and used to compute depth from motion parallax. At both neural and behavioral levels, we test a specific theoretically-motivated hypothesis for how eye and body rotation signals should be integrated to compute depth. A major strength of the proposed work is that it rigorously explores the interaction of multiple visual and extra-retinal signals in tightly-controlled experiments with clear theoretical predictions. The proposed research is directly relevant to the research priorities of the Strabismus, Amblyopia, and Visual Processing program at the National Eye Institute.
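The geometry that makes eye-rotation signals useful for depth from motion parallax can be illustrated with the standard motion/pursuit relation: when a laterally translating observer fixates a point, the depth of a second point relative to fixation is recoverable from the ratio of its retinal slip to the pursuit eye velocity. The sketch below is a minimal illustration under simplified lateral-translation geometry; it is not the proposal's specific hypothesis, and all numbers are hypothetical.

```python
def depth_from_motion_pursuit(theta_dot, alpha_dot, f):
    """Recover depth relative to the fixation point from the motion/pursuit ratio.

    theta_dot: retinal image velocity of the target point (rad/s)
    alpha_dot: smooth pursuit eye velocity maintaining fixation (rad/s)
    f:         fixation distance (m)
    """
    r = theta_dot / alpha_dot      # motion/pursuit ratio
    return f * r / (1.0 - r)       # exact under this simplified geometry;
                                   # first-order approximation: depth ~ f * r

# Hypothetical scene: observer translates laterally at T = 0.1 m/s while
# fixating a point at f = 1.0 m; a second point lies at z = 1.2 m.
T, f, z = 0.1, 1.0, 1.2
alpha_dot = T / f            # eye rotation rate required to hold fixation
theta_dot = T / f - T / z    # residual retinal slip of the farther point
print(round(depth_from_motion_pursuit(theta_dot, alpha_dot, f), 3))  # 0.2 (m beyond fixation)
```

The retinal slip alone is ambiguous; only by scaling it against the eye (or body) rotation signal does depth become recoverable, which is why the extra-retinal integration in Aim #3 matters.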

Public Health Relevance

This proposal addresses fundamental mechanisms underlying our ability to see objects moving in 3D space and to judge scene structure during self-motion. Because deficits in depth perception can arise from various developmental disorders and disease states, it is important to understand binocular mechanisms of 3D vision, as well as how depth perception can be supported by monocular systems that compute depth from motion. Understanding these computations will aid development of prosthetics for vision restoration in mobile patients.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY013644-19
Application #
9897640
Study Section
Mechanisms of Sensory, Perceptual, and Cognitive Processes Study Section (SPC)
Program Officer
Flanders, Martha C
Project Start
2001-07-05
Project End
2021-03-31
Budget Start
2020-04-01
Budget End
2021-03-31
Support Year
19
Fiscal Year
2020
Total Cost
Indirect Cost
Name
University of Rochester
Department
Ophthalmology
Type
Schools of Arts and Sciences
DUNS #
041294109
City
Rochester
State
NY
Country
United States
Zip Code
14627
Zaidel, Adam; DeAngelis, Gregory C; Angelaki, Dora E (2017) Decoupled choice-driven and stimulus-related activity in parietal neurons may be misrepresented by choice probabilities. Nat Commun 8:715
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C (2017) Gain Modulation as a Mechanism for Coding Depth from Motion Parallax in Macaque Area MT. J Neurosci 37:8180-8197
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C (2015) A novel role for visual perspective cues in the neural computation of depth. Nat Neurosci 18:129-37
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C (2015) A functional link between MT neurons and depth perception based on motion parallax. J Neurosci 35:2766-77
Sanada, Takahisa M; DeAngelis, Gregory C (2014) Neural representation of motion-in-depth in area MT. J Neurosci 34:15508-21
Nadler, Jacob W; Barbash, Daniel; Kim, HyungGoo R et al. (2013) Joint representation of depth from motion parallax and binocular disparity cues in macaque area MT. J Neurosci 33:14061-74, 14074a
Sanada, Takahisa M; Nguyenkim, Jerry D; DeAngelis, Gregory C (2012) Representation of 3-D surface orientation by velocity and disparity gradient cues in area MT. J Neurophysiol 107:2109-22
Rao, Vinod; DeAngelis, Gregory C; Snyder, Lawrence H (2012) Neural correlates of prior expectations of motion in the lateral intraparietal and middle temporal areas. J Neurosci 32:10063-74
Anzai, Akiyuki; Chowdhury, Syed A; DeAngelis, Gregory C (2011) Coding of stereoscopic depth information in visual areas V3 and V3A. J Neurosci 31:10270-82
Anzai, Akiyuki; DeAngelis, Gregory C (2010) Neural computations underlying depth perception. Curr Opin Neurobiol 20:367-75

Showing the most recent 10 out of 24 publications