How do we perceive the three-dimensional (3D) structure of the world when our eyes only sense two-dimensional (2D) projections, like a movie on a screen? Reconstructing 3D scene information from 2D retinal images is a highly complex problem, made evident by the great difficulty robots have in turning visual inputs into appropriate 3D motor outputs to move physical chess pieces on a cluttered board, even though they can beat the best human chess players. The goal of this proposal is to elucidate how hierarchical cortical circuits implement robust (i.e., accurate & precise) 3D visual perception. Towards this end, we will answer two fundamental questions about how the brain achieves the 2D-to-3D visual transformation using behavioral, electrophysiological, and neuroimaging approaches.
In Aim 1, we will answer the question of how the visual system represents the spatial pose (i.e., position & orientation) of objects in 3D space. Our hypothesis is that 3D scene information is reconstructed within the V1 → V3A → CIP pathway. We will test this hypothesis by simultaneously recording 3D pose tuning curves from V3A and CIP neurons in macaque monkeys while the animals perform an eight-alternative 3D orientation discrimination task. This experiment will dissociate neural responses to 3D pose that reflect elementary receptive field structures (resulting in 3D orientation preferences that vary with position-in-depth, which we anticipate finding in V3A) from those that represent 3D object features (resulting in 3D orientation preferences that are invariant to position-in-depth, which we anticipate finding in CIP). Using these data, we will additionally test whether neural activity in each area correlates with perceptual sensitivity. By applying Granger causality analysis to simultaneous local field potential recordings in V3A and CIP, we will further test for feedforward/feedback influences between the areas to evaluate their hierarchical structure.
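The feedforward/feedback comparison amounts to testing Granger causality in both directions between the two areas' LFP signals. The sketch below illustrates that logic with the off-the-shelf statsmodels implementation applied to simulated signals; the simulated data, sampling rate, lag settings, and variable names are illustrative assumptions, not the proposal's actual analysis pipeline.

```python
# Minimal sketch (assumed parameters, simulated data) of directional
# Granger causality between V3A and CIP local field potentials.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n_samples = 2000                      # e.g., 2 s of LFP at 1 kHz (assumed)

# Simulated LFPs: CIP lags V3A by 5 samples, mimicking a feedforward influence.
v3a = rng.standard_normal(n_samples)
cip = np.roll(v3a, 5) + 0.5 * rng.standard_normal(n_samples)

# grangercausalitytests expects an (n, 2) array and tests whether the
# second column Granger-causes the first.
max_lag = 10
ff = grangercausalitytests(np.column_stack([cip, v3a]), maxlag=max_lag, verbose=False)  # V3A -> CIP
fb = grangercausalitytests(np.column_stack([v3a, cip]), maxlag=max_lag, verbose=False)  # CIP -> V3A

# Compare F-test p-values at the chosen lag for each direction.
p_ff = ff[max_lag][0]['ssr_ftest'][1]
p_fb = fb[max_lag][0]['ssr_ftest'][1]
print(f"V3A->CIP p = {p_ff:.3g}, CIP->V3A p = {p_fb:.3g}")
```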
In Aim 2, we will answer the question of how binocular disparity cues (differences in where an object's image falls on each retina) and perspective cues (features resulting from 2D retinal projections of the 3D world) are integrated at the perceptual and neuronal levels to achieve robust 3D visual representations. Both cues provide valuable 3D scene information, and human perceptual studies show that their integration is dynamically reweighted depending on the viewing conditions (i.e., position-in-depth & orientation-in-depth) to achieve robust 3D percepts. Specifically, greater weight is assigned to whichever cue is more reliable under the current viewing conditions, but where and how this sophisticated integrative process is implemented in the brain is unknown. We anticipate that V3A and CIP will each show sensitivity to both cue types, but that only CIP will dynamically reweight the cues to achieve robust 3D representations. This research is important for understanding ecologically relevant sensory processing and the neural computations required for us to successfully interact with our 3D environment. Insights from this work will also extend beyond 3D vision by elucidating processes implemented by neural circuits to solve highly nonlinear optimization problems that turn ambiguous sensory signals into robust percepts.
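For concreteness, the dynamic reweighting described above follows the standard reliability-weighted (maximum-likelihood) cue-combination rule from the human psychophysics literature: each cue's weight is its inverse variance normalized by the summed inverse variances, so the more reliable cue dominates the combined estimate. The sketch below uses illustrative numbers only (not measured values) to show the computation a reweighting area such as CIP would be expected to approximate.

```python
# Minimal sketch of reliability-weighted cue combination; all numbers are
# illustrative assumptions, not measured data.
import numpy as np

def combine_cues(est_disparity, sigma_disparity, est_perspective, sigma_perspective):
    """Return the reliability-weighted estimate, its predicted s.d., and the disparity weight."""
    r_d = 1.0 / sigma_disparity**2          # reliability of the disparity cue
    r_p = 1.0 / sigma_perspective**2        # reliability of the perspective cue
    w_d = r_d / (r_d + r_p)                 # weight assigned to disparity
    combined = w_d * est_disparity + (1.0 - w_d) * est_perspective
    combined_sigma = np.sqrt(1.0 / (r_d + r_p))   # combined estimate is more precise than either cue alone
    return combined, combined_sigma, w_d

# Viewing condition where disparity is the more reliable cue (assumed): disparity dominates.
print(combine_cues(30.0, 2.0, 34.0, 6.0))
# Viewing condition where disparity reliability falls (assumed): weight shifts to perspective.
print(combine_cues(30.0, 8.0, 34.0, 3.0))
```

The empirical question in Aim 2 is whether neuronal cue weights in CIP, but not V3A, track these reliability-dependent shifts across viewing conditions.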

Public Health Relevance

This research will contribute fundamental knowledge about how the brain transforms two-dimensional (2D) visual representations of the world into three-dimensional (3D) visual perception. In the short term, this work will improve our understanding of how the brain processes ecologically relevant sensory information; in the long term, it will provide fundamental knowledge for understanding and treating neurophysiological disorders and cognitive dysfunction. Because 3D visual perception is a major component of virtual reality (VR), the public health relevance of this work also extends into the entertainment industry, which faces growing demands to develop VR that is safe, persuasive, ergonomic, and non-nauseating.

Agency: National Institutes of Health (NIH)
Institute: National Eye Institute (NEI)
Type: Research Project (R01)
Project #: 5R01EY029438-03
Application #: 9994954
Study Section: Mechanisms of Sensory, Perceptual, and Cognitive Processes Study Section (SPC)
Program Officer: Flanders, Martha C
Project Start: 2018-09-01
Project End: 2023-08-31
Budget Start: 2020-09-01
Budget End: 2021-08-31
Support Year: 3
Fiscal Year: 2020
Total Cost:
Indirect Cost:
Name: University of Wisconsin Madison
Department: Neurosciences
Type: Schools of Medicine
DUNS #: 161202122
City: Madison
State: WI
Country: United States
Zip Code: 53715