Humans routinely and confidently base their physical actions on the visual perception of space. We step off curbs but not cliffs, merge safely into moving traffic, and dice chicken without chopping off our fingers. How does the visual system build representations of the environment that are so reliable? Recent work has shown that visual performance is in many ways nearly optimal. An important example occurs when multiple sources of visual information (such as stereo, perspective, and motion parallax) are present and the scene's layout could be determined from any of them, as is often the case in natural vision. In this situation, the visual system typically constructs a percept that not only uses all the sources of information but averages them together, giving the most reliable sources the greatest weight (a formal sketch of this weighted-average model appears at the end of this summary). In principle, such weighted averaging should improve not only the appearance of the scene but also performance in tasks that use the percept; whether it does is not yet known. The first study in the proposal quantifies this improvement in performance, using high-quality visual displays and a task that is important for driving.

There are also situations in which different sources of information could, in principle, be combined to give an extra boost to performance, above and beyond what a weighted average provides. This can happen because different cues excel at providing different sorts of information about shape and distance. If the information from different cues could be combined before each cue is used to estimate aspects of the scene layout, a "nonlinear" improvement in performance could be realized. Does the visual system exploit this opportunity? The answer is important for understanding the neural mechanisms of visual perception. The second study addresses this question by measuring performance in a task in which observers adjust the shapes of simulated objects.

Finally, the visual system builds accurate percepts and is exquisitely sensitive to changes in spatial layout. This requires that the system be kept finely tuned: any drift in its computational mechanisms must be quickly detected and corrected. How this is done is not understood, but there is reason to believe the visual system can compare the outputs of different mechanisms with one another and recalibrate itself when discrepancies are found. We propose that this process can be understood using the same conceptual tools already developed to understand cue combination. We exploit a depth recalibration phenomenon discovered forty years ago to test predictions about how quickly different visual mechanisms will be recalibrated when they disagree.
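For concreteness, here is a minimal sketch of the standard reliability-weighted model of cue combination referred to above, assuming each cue $i$ yields an unbiased estimate $\hat{S}_i$ of some scene property, corrupted by independent Gaussian noise of variance $\sigma_i^2$ (the notation is illustrative, not taken from the proposal itself):

\[
\hat{S} \;=\; \sum_i w_i \,\hat{S}_i,
\qquad
w_i \;=\; \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},
\qquad
\sigma_{\text{combined}}^2 \;=\; \left(\sum_i \frac{1}{\sigma_i^2}\right)^{-1} \;\le\; \min_i \sigma_i^2 .
\]

Under these assumptions the precision-weighted average is the maximum-likelihood estimate, and its variance is smaller than that of any single cue. This is why weighted averaging should produce a measurable gain in task performance whenever multiple cues are available, which is what the first study is designed to quantify.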