The long-term objective of the proposed work is to understand how learning by the visual system helps it to represent the immediate environment during perception. Because perception is accurate, we can know spatial layout: the shapes, orientations, sizes, and spatial locations of the objects and surfaces around us. But this accuracy requires that the visual system learn over time how best to interpret visual "cues". These cues are signals that the visual system extracts from the retinal images and that are informative about spatial layout. Known cues include binocular disparity, texture gradients, occlusion relations, motion parallax, and familiar size, to name a few. How do these cues come to be interpreted correctly?

A fundamental problem is that visual cues are ambiguous. Even if cues could be measured exactly (which they cannot, the visual system being a physical device), there would still be different possible 3D interpretations for a given set of cues. As a result, the visual system is forced to operate probabilistically: the way things "look" to us reflects an implicit guess as to which interpretation of the cues is most likely to be correct. Each additional cue helps improve the guess. For example, the retinal image of a door could be interpreted as a vertical rectangle or as some other quadrilateral at a non-vertical orientation in space; the shadow cues at the bottom of the door help the system determine that it is a vertical rectangle.

What mechanisms does the visual system use to discern which cues are available for interpreting images correctly? The proposed work aims to answer this fundamental question about perceptual learning. It was recently shown that the visual system can detect and start using new cues for perception. This phenomenon can be studied in the laboratory using classical conditioning procedures that were originally developed to study learning in animals.
In the proposed experiments, a model system is used to understand details about when this learning occurs and what is learned. The data will be compared to predictions based on older, analogous studies in the animal learning literature, and interpreted in the context of Bayesian statistical inference, especially machine learning theory. The proposed work benefits public health by characterizing the brain mechanisms that keep visual perception accurate. These mechanisms are at work in the many months during which a person with congenital cataracts learns to use vision after the cataracts are removed, and it is presumably these mechanisms that go awry when an individual with a family history of synesthesia or autism develops anomalous experience-dependent perceptual responses. Neurodegenerative diseases may disrupt visual learning, in which case visual learning tests could be used to detect disease; understanding the learning of new cues in human vision could lead to better computerized aids for the visually impaired; and knowing what causes a new cue to be learned could lead to new technologies for training people to perceive accurately in novel work environments.
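The probabilistic interpretation described above can be illustrated with a minimal sketch of Bayesian cue combination. This is an illustrative textbook model, not the project's own analysis: it assumes each cue yields a Gaussian estimate of a scene property (e.g. surface slant), so the statistically optimal combined estimate is a reliability-weighted average, and adding a cue always reduces uncertainty. The example values are hypothetical.

```python
def combine_cues(mu1, sigma1, mu2, sigma2):
    """Fuse two independent Gaussian cue estimates into one estimate.

    Each cue reports a mean (mu) and a standard deviation (sigma);
    reliability is the inverse variance. The combined mean is the
    reliability-weighted average, and the combined sigma is always
    smaller than either cue's sigma alone.
    """
    w1 = 1.0 / sigma1 ** 2   # reliability of cue 1
    w2 = 1.0 / sigma2 ** 2   # reliability of cue 2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return mu, sigma

# Hypothetical example: binocular disparity suggests a slant of 10 deg
# (sd 2), a texture gradient suggests 16 deg (sd 4). The combined
# estimate falls nearer the more reliable disparity cue.
mu, sigma = combine_cues(10.0, 2.0, 16.0, 4.0)  # mu = 11.2, sigma < 2
```

On this view, "recruiting" a new cue amounts to the visual system learning the statistical relationship between a signal and the scene property, so that the signal can enter this weighted combination.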

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY013988-06
Application #
7911700
Study Section
Central Visual Processing Study Section (CVP)
Program Officer
Steinmetz, Michael A
Project Start
2002-04-01
Project End
2012-08-31
Budget Start
2010-09-01
Budget End
2011-08-31
Support Year
6
Fiscal Year
2010
Total Cost
$226,559
Indirect Cost
Name
State College of Optometry
Department
Ophthalmology
Type
Schools of Optometry/Ophthalmol
DUNS #
152652764
City
New York
State
NY
Country
United States
Zip Code
10036
Caziot, Baptiste; Backus, Benjamin T (2015) Stereoscopic Offset Makes Objects Easier to Recognize. PLoS One 10:e0129101
Caziot, Baptiste; Valsecchi, Matteo; Gegenfurtner, Karl R et al. (2015) Fast perception of binocular disparity. J Exp Psychol Hum Percept Perform 41:909-16
Jain, Anshul; Fuller, Stuart; Backus, Benjamin T (2014) Cue-recruitment for extrinsic signals after training with low information stimuli. PLoS One 9:e96383
Harrison, Sarah J; Backus, Benjamin T (2014) A trained perceptual bias that lasts for weeks. Vision Res 99:148-53
Jain, Anshul; Backus, Benjamin T (2013) Generalization of cue recruitment to non-moving stimuli: location and surface-texture contingent biases for 3-D shape perception. Vision Res 82:13-21
Harrison, Sarah J; Backus, Benjamin T (2012) Associative learning of shape as a cue to appearance: a new demonstration of cue recruitment. J Vis 12:
Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul (2011) Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias. Vision Res 51:978-86
Harrison, Sarah J; Backus, Benjamin T (2010) Disambiguating Necker cube rotation using a location cue: what types of spatial location signal can the visual system learn? J Vis 10:23
Jain, Anshul; Fuller, Stuart; Backus, Benjamin T (2010) Absence of cue-recruitment for extrinsic signals: sounds, spots, and swirling dots fail to influence perceived 3D rotation direction after training. PLoS One 5:e13295
Jain, Anshul; Backus, Benjamin T (2010) Experience affects the use of ego-motion signals during 3D shape perception. J Vis 10:

Showing the most recent 10 out of 24 publications