Stereoscopic depth perception depends on the slight disparities (position shifts) between the retinal images of the left and right eyes. Understanding how the visual system detects these disparities and uses them to compute depth is essential not only for explaining human recognition of visual objects and perception of three-dimensional visual space, but also for developing effective treatments for clinical deficits of retinal correspondence and for improving machine-vision algorithms used in object recognition, robotics, and visual prosthetic devices. In the traditional view, stereopsis finds corresponding features in the two eyes' retinal images through a one-dimensional (horizontal) matching process and converts the disparities of these features into perceived depth. This proposal demonstrates that such a matching process, and any stereoscopic depth perception that depends on it, would fail in naturalistic visual scenes containing overlapping surfaces. New models are presented that use physiologically realistic neuronal elements to describe stereoscopic correspondence matching as a two-dimensional process. The models are developed and tested in research with four specific experimental aims: (1) to determine how stereoscopic performance depends on stimulus orientation and disparity direction; (2) to identify the stimulus primitives used in human stereoscopic matching; (3) to identify the stimulus and computational requirements of transparency perception; and (4) to examine the mechanisms of depth constancy that operate during torsional rotation of the eyes. To achieve these aims, a variety of traditional and novel psychophysical methods will be used. In many of the proposed experiments, these methods will be applied to a new class of visual stimuli: stereo plaids. Because they share characteristics of naturalistic, multi-object visual scenes, stereo plaids allow disparity detection and depth computation to be dissociated for separate experimental examination. The resulting data will reveal stereoscopic processes that have not previously been accessible for separate study.
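To make the traditional scheme concrete, the following is a minimal sketch of a purely horizontal, window-based correspondence matcher of the kind the abstract argues against. It is illustrative only: the function names, the sum-of-absolute-differences cost, and the parameter defaults are assumptions introduced here, not part of the proposal. A second helper notes why such a matcher is underdetermined for 1-D patterns: a grating tilted θ degrees from vertical constrains only the disparity component perpendicular to its stripes, d⊥ = d_h·cos θ, the stereo analogue of the motion aperture problem.

```python
import numpy as np

def horizontal_disparity_map(left, right, max_disp=16, window=5):
    """Minimal 1-D (horizontal-only) correspondence matcher.

    For each pixel of the left image, search along the same row of the
    right image for the horizontal shift that minimizes the sum of
    absolute differences (SAD) over a small window.  This is the kind
    of purely horizontal matching process that, the proposal argues,
    fails for oriented (1-D) patterns and for scenes with overlapping
    surfaces.
    """
    h, w = left.shape
    half = window // 2
    disparity = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disparity[y, x] = np.argmin(costs)  # best horizontal shift, in pixels
    return disparity

def projected_disparity(d_h, theta_deg):
    """Disparity actually available from a 1-D grating.

    A grating tilted theta_deg from vertical constrains only the
    disparity component perpendicular to its stripes, so a horizontal
    interocular shift d_h is equivalent to the projected disparity
        d_perp = d_h * cos(theta).
    At theta = 90 deg (a horizontal grating) the horizontal matcher
    above has nothing left to match.
    """
    return d_h * np.cos(np.radians(theta_deg))
```

On this reading, a plaid (the sum of two gratings) supplies two such projected-disparity constraints, whose intersection specifies a full two-dimensional disparity; this is one way stimuli like the stereo plaids described above can dissociate disparity detection from depth computation.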

Agency: National Institutes of Health (NIH)
Institute: National Eye Institute (NEI)
Type: Research Project (R01)
Project #: 5R01EY012286-04
Application #: 6518611
Study Section: Visual Sciences B Study Section (VISB)
Program Officer: Oberdorfer, Michael
Project Start: 1999-04-01
Project End: 2004-03-31
Budget Start: 2002-04-01
Budget End: 2003-03-31
Support Year: 4
Fiscal Year: 2002
Total Cost: $279,603
Indirect Cost:
Name: Syracuse University
Department: Miscellaneous
Type: Schools of Engineering
DUNS #: 002257350
City: Syracuse
State: NY
Country: United States
Zip Code: 13244
Farell, Bart; Chai, Yu-Chin; Fernandez, Julian M (2010) The horizontal disparity direction vs. the stimulus disparity direction in the perception of the depth of two-dimensional patterns. J Vis 10:25.1-15
Fernandez, Julian M; Farell, Bart (2009) A new theory of structure-from-motion perception. J Vis 9:23.1-20
Chai, Yu-Chin; Farell, Bart (2009) From disparity to depth: how to make a grating and a plaid appear in the same depth plane. J Vis 9:3.1-19
Farell, Bart; Chai, Yu-Chin; Fernandez, Julian M (2009) Projected disparity, not horizontal disparity, predicts stereo depth of 1-D patterns. Vision Res 49:2209-16
Fernandez, Julian M; Farell, Bart (2009) Is perceptual space inherently non-Euclidean? J Math Psychol 53:86-91
Fernandez, Julian M; Farell, Bart (2008) A neural model for the integration of stereopsis and motion parallax in structure-from-motion. Neurocomputing 71:1629-1641
Fernandez, Julian M; Farell, Bart (2007) Shape constancy and depth-order violations in structure from motion: a look at non-frontoparallel axes of rotation. J Vis 7:3.1-18
Pelli, Denis G; Burns, Catherine W; Farell, Bart et al. (2006) Feature detection and letter identification. Vision Res 46:4646-74
Farell, Bart (2006) Orientation-specific computation in stereoscopic vision. J Neurosci 26:9098-106
Fernandez, Julian M; Farell, Bart (2006) A reversed structure-from-motion effect for simultaneously viewed stereo-surfaces. Vision Res 46:1230-41

Showing the most recent 10 out of 21 publications