Humans routinely and confidently base their physical actions on the visual perception of space. We step off curbs but not cliffs, merge successfully with oncoming traffic, and dice chicken without chopping off our fingers. How does the visual system build representations of the environment that are so reliable? Recent work has shown that visual performance is in many ways nearly optimal.

An important example occurs when multiple types of visual information (such as stereo, perspective, and motion parallax) are present and the scene's layout could be determined from any of them, as is often the case in natural vision. In this situation, the visual system often constructs a percept that not only uses all the sources of information but averages them together, with the most reliable sources given the greatest weight. In principle, such weighted averaging should affect not only the appearance of the scene but also performance in tasks that use the percept. Whether this is so is not yet known. The first study in the proposal quantifies the improvement in performance, using high-quality visual displays and a task that is important for driving.

There are also situations in which different sources of information could, in principle, be combined to give an extra boost to performance, above and beyond the use of a weighted average. This can happen because different cues excel at providing different sorts of information about shape and distance. If the information from different cues could be combined before each cue is used to estimate various aspects of the scene layout, a "nonlinear" improvement in performance could be realized. Does the visual system exploit this opportunity? The answer is important for understanding the neural mechanisms of visual perception. The second study addresses this question by measuring performance in a task in which observers adjust the shapes of simulated objects.

Finally, the visual system builds accurate percepts and is exquisitely sensitive to changes in spatial layout. This requires that the system be kept finely tuned: any drift in its computational mechanisms must be quickly detected and corrected. How this is done is not understood, but there is reason to believe the visual system can compare the outputs of different mechanisms with one another and recalibrate itself when discrepancies are found. We propose that this process can be understood using the same conceptual tools that have already been developed to understand cue combination. We exploit a depth recalibration phenomenon discovered forty years ago to test predictions about how fast different visual mechanisms will be recalibrated when they disagree with each other.
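For concreteness, the standard model of reliability-weighted cue combination weights each cue in proportion to the inverse of its noise variance. The sketch below is a minimal illustration of that computation, not part of the proposal itself; the cue values and variances are hypothetical numbers chosen only for the example.

```python
# Minimal sketch of reliability-weighted (inverse-variance) cue combination.
# All cue estimates and variances below are hypothetical, for illustration only.

def combine_cues(estimates, variances):
    """Combine per-cue depth estimates using inverse-variance weights.

    estimates: list of depth estimates, one per cue (e.g., in meters)
    variances: list of each cue's noise variance (same units, squared)
    Returns (combined_estimate, combined_variance).
    """
    reliabilities = [1.0 / v for v in variances]   # reliability = 1 / variance
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]   # weights sum to 1
    combined = sum(w * e for w, e in zip(weights, estimates))
    # Assuming independent cue noise, the combined variance is
    # 1 / sum(1/var_i), never larger than the best single cue's variance.
    combined_var = 1.0 / total
    return combined, combined_var

# Example: stereo says 2.0 m (variance 0.04); perspective says 2.3 m (variance 0.16).
depth, var = combine_cues([2.0, 2.3], [0.04, 0.16])
print(depth, var)  # 2.06, 0.032 -- the more reliable cue (stereo) dominates
```

On this model, the combined estimate is more precise than any single cue alone; it is this predicted behavioral benefit, rather than the averaging itself, that the first study is designed to measure.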

Agency: National Institutes of Health (NIH)
Institute: National Eye Institute (NEI)
Type: Research Project (R01)
Project #: 5R01EY013988-02
Application #: 6736838
Study Section: Visual Sciences B Study Section (VISB)
Program Officer: Oberdorfer, Michael
Project Start: 2003-05-01
Project End: 2006-04-30
Budget Start: 2004-05-01
Budget End: 2005-04-30
Support Year: 2
Fiscal Year: 2004
Total Cost: $277,375
Indirect Cost:
Name: University of Pennsylvania
Department: Psychology
Type: Schools of Arts and Sciences
DUNS #: 042250712
City: Philadelphia
State: PA
Country: United States
Zip Code: 19104
Caziot, Baptiste; Backus, Benjamin T (2015) Stereoscopic Offset Makes Objects Easier to Recognize. PLoS One 10:e0129101
Caziot, Baptiste; Valsecchi, Matteo; Gegenfurtner, Karl R et al. (2015) Fast perception of binocular disparity. J Exp Psychol Hum Percept Perform 41:909-16
Jain, Anshul; Fuller, Stuart; Backus, Benjamin T (2014) Cue-recruitment for extrinsic signals after training with low information stimuli. PLoS One 9:e96383
Harrison, Sarah J; Backus, Benjamin T (2014) A trained perceptual bias that lasts for weeks. Vision Res 99:148-53
Jain, Anshul; Backus, Benjamin T (2013) Generalization of cue recruitment to non-moving stimuli: location and surface-texture contingent biases for 3-D shape perception. Vision Res 82:13-21
Harrison, Sarah J; Backus, Benjamin T (2012) Associative learning of shape as a cue to appearance: a new demonstration of cue recruitment. J Vis 12:
Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul (2011) Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias. Vision Res 51:978-86
Harrison, Sarah J; Backus, Benjamin T (2010) Uninformative visual experience establishes long term perceptual bias. Vision Res 50:1905-11
Di Luca, Massimiliano; Ernst, Marc O; Backus, Benjamin T (2010) Learning to use an invisible visual signal for perception. Curr Biol 20:1860-3
Harrison, Sarah J; Backus, Benjamin T (2010) Disambiguating Necker cube rotation using a location cue: what types of spatial location signal can the visual system learn? J Vis 10:23

Showing the most recent 10 out of 24 publications