Our ability to navigate, to locate, identify, and grasp objects, to judge distances, and to drive vehicles owes much to our having two forward-facing eyes. Stereoscopic depth perception depends on the slight disparities (position shifts) between the retinal images of the left and right eyes. Knowing how the visual system detects these disparities and uses them to calculate depth is essential for understanding how humans recognize objects and localize them in three-dimensional visual space. It is also essential for finding effective treatments for clinical cases of retinal-correspondence deficits and for improving machine vision algorithms for object recognition, robotics, and visual prosthetic devices. Much has been discovered about the basis of human stereoscopic depth perception and the use of depth cues to infer the shape of objects. The objective of this proposal is to update our understanding of how stereo information is computed in light of new evidence about the mechanism responsible. This evidence shows that the way disparities are computed is much more dependent on the detailed spatial properties of the stimuli than previously thought; a visual display may therefore contain only sub-optimal reference stimuli with respect to which the disparity of the target stimulus can be computed. Optimizing the disparity computation therefore requires sophisticated selection of the best reference stimulus consistent with the properties and capacities of the mechanism that carries out the comparison between reference and target stimuli. This project attempts to determine the properties of the mechanism that computes relative disparity and to characterize the reference-stimulus selection process. Psychophysical measures will be used on stimuli designed to reveal the basics of stereo computations, as well as on stimuli that allow generalization to natural-image objects and real-world tasks.
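As background to the abstract's description of disparity-based depth, the standard binocular geometry can be sketched in a few lines: depth is recovered from horizontal disparity by triangulation, and the relative disparity between a target and a reference is just the difference of their absolute disparities. This is an illustrative sketch only, not the mechanism the proposal investigates; the focal length, interocular baseline, and disparity values below are hypothetical.

```python
# Illustrative pinhole-camera triangulation (hypothetical parameter values).
# Depth Z = f * B / d, where f is focal length in pixels, B the interocular
# baseline in meters, and d the horizontal disparity in pixels.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Return depth in meters for a positive horizontal disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A nearer target produces a larger disparity than a farther reference stimulus.
target_depth = depth_from_disparity(20.0, focal_px=1000.0, baseline_m=0.065)     # 3.25 m
reference_depth = depth_from_disparity(10.0, focal_px=1000.0, baseline_m=0.065)  # 6.5 m

# Relative disparity: the difference between the two absolute disparities,
# the quantity the abstract says is computed between target and reference.
relative_disparity = 20.0 - 10.0  # 10 px
```

The sketch shows why the choice of reference matters in principle: the same target disparity yields a different relative disparity for each candidate reference, so the comparison mechanism's output depends on which reference stimulus is selected.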

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY012286-07
Application #
7110230
Study Section
Central Visual Processing Study Section (CVP)
Program Officer
Oberdorfer, Michael
Project Start
1999-04-01
Project End
2010-08-31
Budget Start
2006-09-01
Budget End
2007-08-31
Support Year
7
Fiscal Year
2006
Total Cost
$316,195
Indirect Cost
Name
Syracuse University
Department
Miscellaneous
Type
Schools of Engineering
DUNS #
002257350
City
Syracuse
State
NY
Country
United States
Zip Code
13244
Farell, Bart; Chai, Yu-Chin; Fernandez, Julian M (2010) The horizontal disparity direction vs. the stimulus disparity direction in the perception of the depth of two-dimensional patterns. J Vis 10:25.1-15
Fernandez, Julian M; Farell, Bart (2009) A new theory of structure-from-motion perception. J Vis 9:23.1-20
Chai, Yu-Chin; Farell, Bart (2009) From disparity to depth: how to make a grating and a plaid appear in the same depth plane. J Vis 9:3.1-19
Farell, Bart; Chai, Yu-Chin; Fernandez, Julian M (2009) Projected disparity, not horizontal disparity, predicts stereo depth of 1-D patterns. Vision Res 49:2209-16
Fernandez, Julian M; Farell, Bart (2009) Is perceptual space inherently non-Euclidean? J Math Psychol 53:86-91
Fernandez, Julian M; Farell, Bart (2008) A neural model for the integration of stereopsis and motion parallax in structure-from-motion. Neurocomputing 71:1629-1641
Fernandez, Julian M; Farell, Bart (2007) Shape constancy and depth-order violations in structure from motion: a look at non-frontoparallel axes of rotation. J Vis 7:3.1-18
Pelli, Denis G; Burns, Catherine W; Farell, Bart et al. (2006) Feature detection and letter identification. Vision Res 46:4646-74
Farell, Bart (2006) Orientation-specific computation in stereoscopic vision. J Neurosci 26:9098-106
Fernandez, Julian M; Farell, Bart (2006) A reversed structure-from-motion effect for simultaneously viewed stereo-surfaces. Vision Res 46:1230-41

Showing the most recent 10 out of 21 publications