Binocular stereopsis is the ability to use differences between the images presented to the two eyes (binocular disparities) to perceive the three-dimensional structure of the outside world. To detect that an object has a binocular disparity, it is first necessary to correctly match up the images of that object in the two eyes (the stereo correspondence problem). Humans do this very robustly, even when the two eyes are shown computer-generated random patterns (random-dot stereograms). The neural implementation of this correspondence process is incompletely understood. Current models suggest that the monocular images undergo substantial processing before any binocular comparison is possible, and that the final result of this monocular processing is a simple output (the response rate of a single neuron). This processing integrates image information over finite regions of visual space. If binocular comparisons are made after this integration, then only a coarse spatial map of disparities should be visible. We recorded the activity of disparity-selective neurons in the visual cortex of awake, behaving animals in response to sinusoidal variations in disparity over space at different scales (spatial frequencies). The neuronal responses did indeed appear to be limited to a coarse spatial representation of disparity, set by the size of each neuron's spatial area of integration. This coarse representation in the neuronal signals closely matches psychophysical measures of the ability to detect disparity changes over space. It has been known for many years that human resolution for such disparity modulations is coarse, but this phenomenon has never been explained. Thus it appears that the mechanism by which disparity signals are generated early in cortical processing, as part of solving the stereo correspondence problem, accounts for the previously unexplained limitation of human stereo resolution.
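As a rough illustration of the limit described above (a sketch, not the study's actual analysis), suppose a neuron's disparity signal is formed by linearly pooling local disparity over a Gaussian integration window of width sigma. A sinusoidal disparity modulation of spatial frequency f is then attenuated by the window's Fourier amplitude, exp(-2*pi^2*sigma^2*f^2), so modulations much finer than the integration region are effectively invisible. The short Python snippet below (the Gaussian window shape, the linear-pooling assumption, and the example numbers are all hypothetical choices for illustration) prints this attenuation for a few modulation frequencies.

```python
import numpy as np

def modulation_attenuation(freqs_cpd, sigma_deg):
    """Relative amplitude of a sinusoidal disparity modulation after
    linear pooling within a Gaussian integration window (std sigma_deg).
    For a unit-area Gaussian, the attenuation is its Fourier amplitude:
    exp(-2 * pi**2 * sigma**2 * f**2)."""
    return np.exp(-2.0 * np.pi**2 * sigma_deg**2 * freqs_cpd**2)

# Hypothetical example: a neuron pooling over sigma ~ 0.5 deg passes slow
# disparity modulations but strongly attenuates fine ones.
freqs = np.array([0.1, 0.25, 0.5, 1.0, 2.0])  # cycles/deg of disparity modulation
sigma = 0.5                                   # deg, assumed integration width
for f, a in zip(freqs, modulation_attenuation(freqs, sigma)):
    print(f"{f:4.2f} c/deg -> relative modulation {a:.3f}")
```

Under these assumptions the pooled signal behaves as a low-pass filter on disparity modulation, which is one simple way to see why spatial integration before (or at) the binocular comparison stage would yield only a coarse spatial map of disparity.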