One of the brain's major functions is to represent the 3D structure of the world from a sequence of 2D images projected onto the retinae. During observer translation, the relative image motion between stationary objects at different distances (motion parallax, MP) provides potent depth information in the absence of binocular cues. However, if an object is moving in the world, the computation of depth from MP is complicated by an additional component of image motion related to the object's motion in the world. Previous experimental and theoretical work on depth perception from MP has assumed that objects are stationary in the world. We propose to use a combination of human psychophysics and computational modelling to address, for the first time, how humans infer the depth of moving objects during self-motion. We consider two ways that the brain might compute the depth of moving objects from MP. First, if the brain can accurately parse retinal image motion into components related to self-motion and object motion, then depth can be computed from the component of image motion that is caused by self-motion.
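As a concrete illustration (not part of the proposal's methods), one common approximation relates an object's depth relative to the fixation point to the ratio of its retinal image velocity to the pursuit eye velocity generated while fixating during lateral self-motion. The minimal sketch below uses this idea to classify depth sign from the self-motion component of image motion; the variable names and the sign convention are assumptions chosen for the example.

```python
# Minimal sketch (illustrative only): depth sign from the self-motion
# component of image motion, using the approximate relation
# relative depth ~ (retinal velocity) / (pursuit velocity).
# Names and the sign convention below are assumptions for this example.

def depth_sign_from_parallax(retinal_vel_deg_s: float,
                             pursuit_vel_deg_s: float) -> str:
    """Classify an object as 'near' or 'far' relative to the fixation point.

    retinal_vel_deg_s: image velocity of the object relative to the fovea,
        ideally only the component caused by self-motion (after flow parsing).
    pursuit_vel_deg_s: eye rotation velocity that keeps fixation on a
        world-fixed point during observer translation.
    """
    if pursuit_vel_deg_s == 0.0:
        raise ValueError("No pursuit signal: depth from MP is undefined here.")
    motion_pursuit_ratio = retinal_vel_deg_s / pursuit_vel_deg_s
    # Assumed convention: positive ratio -> farther than fixation,
    # negative ratio -> nearer than fixation.
    return "far" if motion_pursuit_ratio > 0 else "near"


# Example: an object whose self-motion component moves with the pursuit
# direction is classified as farther than the fixation point.
print(depth_sign_from_parallax(retinal_vel_deg_s=0.5, pursuit_vel_deg_s=2.0))
```

If part of the measured retinal velocity is actually caused by object motion in the world, that contamination propagates directly into the ratio, which is why accurate flow parsing matters for this strategy.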
In Aim 1, we test this hypothesis by asking subjects to judge the depth sign (near vs. far) of objects that are moving or stationary in the world. We hypothesize that subjects' depth judgements will be biased by object motion in the world, since recent studies suggest that flow parsing is not completely accurate. Our preliminary data support this hypothesis. Second, we consider that the brain may not be able to accurately isolate the component of image motion caused by self-motion, because there is uncertainty in inferring whether or not objects are moving in the world. This leads us to hypothesize that the biases observed in Aim 1 can be explained by treating perception as joint inference of both depth and object motion in the world.
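A toy sketch of what such joint inference could look like, in the spirit of standard causal-inference observer models rather than the specific model to be developed here: the observer weighs the hypothesis that the object is stationary (all residual image motion attributed to depth) against the hypothesis that it is moving, and averages the depth estimate across hypotheses by their posterior probabilities. All priors, noise parameters, and the linear generative assumptions below are placeholders.

```python
import math

# Toy causal-inference observer (illustrative only; not the proposal's model).
# Assumed generative model: with pursuit velocity p, an object at relative
# depth d contributes image velocity d * p; if the object is also moving in
# the world it adds velocity drawn from N(0, sigma_obj^2); the measurement m
# is further corrupted by sensory noise N(0, sigma_noise^2).

def gauss_pdf(x: float, var: float) -> float:
    return math.exp(-0.5 * x * x / var) / math.sqrt(2.0 * math.pi * var)

def causal_inference_depth(m: float, p: float,
                           sigma_d: float = 1.0,      # prior SD on depth
                           sigma_obj: float = 2.0,    # object-velocity SD if moving
                           sigma_noise: float = 0.5,  # sensory noise SD
                           prior_stationary: float = 0.7):
    """Return (posterior prob. the object is stationary, model-averaged depth)."""
    var_depth = (p * sigma_d) ** 2           # velocity variance due to depth prior

    # Hypothesis S: object stationary -> all residual velocity reflects depth.
    var_s = var_depth + sigma_noise ** 2
    depth_s = m * p * sigma_d ** 2 / var_s   # posterior-mean depth under S

    # Hypothesis M: object moving -> part of the velocity is object motion.
    var_m = var_depth + sigma_obj ** 2 + sigma_noise ** 2
    depth_m = m * p * sigma_d ** 2 / var_m   # depth estimate is shrunk under M

    # Posterior over hypotheses (Bayes' rule with Gaussian marginal likelihoods).
    like_s = gauss_pdf(m, var_s) * prior_stationary
    like_m = gauss_pdf(m, var_m) * (1.0 - prior_stationary)
    post_stationary = like_s / (like_s + like_m)

    # Model-averaged depth estimate across the two causal hypotheses.
    depth_hat = post_stationary * depth_s + (1.0 - post_stationary) * depth_m
    return post_stationary, depth_hat

# Example: a large measured velocity is more readily attributed to object
# motion, which pulls the depth estimate toward zero -- a bias of the kind
# hypothesized in Aim 1.
print(causal_inference_depth(m=3.0, p=2.0))
```

In a model of this general form, the depth estimate depends on the inferred probability that the object is stationary, and that probability in turn depends on cue reliability, which is the qualitative pattern the aims below are designed to test.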
In Aim 2, we test this hypothesis by asking subjects to answer two questions: 1) is the object moving or stationary in the world? 2) is the object farther or nearer than the fixation point? We hypothesize that depth estimates will depend on the subject's belief about object motion in the world, and should also depend systematically on the reliability of depth cues. Our preliminary results support the predictions of a causal inference scheme, and we will compare the data to predictions of a Bayesian ideal observer model. This fellowship will provide the candidate with training in computational and systems neuroscience through interactions with the research mentor, the broader neuroscience community at the University of Rochester, and formal coursework. The proposed research is consistent with NEI goals to "understand how the brain processes visual information" (National Plan for Eye and Vision Research). In addition, the knowledge gained from this work may help us better understand the various neural and ophthalmological diseases that affect depth perception, and may assist in the development of artificial vision and navigation systems.

Public Health Relevance

This proposal will examine, for the first time, how humans perceive the depth of moving objects during self-motion based on motion parallax cues, using a combination of psychophysical experiments and computational modelling. The knowledge to be gained from this work will help us to better understand neural and ophthalmological diseases that affect depth perception, and may assist in the development of artificial vision and navigation systems.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Individual Predoctoral NRSA for M.D./Ph.D. Fellowships (ADAMHA) (F30)
Project #
5F30EY031183-02
Application #
10076550
Study Section
Special Emphasis Panel (ZRG1)
Program Officer
Agarwal, Neeraj
Project Start
2020-01-01
Project End
2023-12-31
Budget Start
2021-01-01
Budget End
2021-12-31
Support Year
2
Fiscal Year
2021
Total Cost
Indirect Cost
Name
University of Rochester
Department
Other Basic Sciences
Type
Schools of Arts and Sciences
DUNS #
041294109
City
Rochester
State
NY
Country
United States
Zip Code
14627