The goal of the proposed research is to understand how the human visual system resolves the inherent geometric ambiguities associated with most visual cues to depth. The brain can resolve cue ambiguity in two ways: (1) by applying prior knowledge of ecological constraints on the underlying scene properties (e.g., that figures tend to be symmetric) and (2) by cooperatively using information from other sensory cues to disambiguate those properties.

The first two principal aims focus on the first part of the problem. They are shaped by the observation that much of the statistical structure that makes monocular depth cues informative is categorical in nature: motions are rigid or not, figures are symmetric or not, textures are homogeneous or not, and so on. We will study how the visual system combines information from multiple cues to disambiguate which of several possible prior constraints to apply when interpreting a cue. Casting the problem within a Bayesian framework provides a formal system for modeling robust cue integration, which allows the visual system to deal effectively with large conflicts between sensory cues. We will perform experiments to test the Bayesian model against other models of robust cue integration. The framework also characterizes how the brain adapts its internal models of the prior statistics that make monocular cues informative; we will study how human observers use information obtained by combining multiple cues to adapt these internal models, and how that learning affects the integration of cues for estimating surface orientation and shape.

The final principal aim tests whether and how the brain uses non-visual (haptic/kinesthetic) information derived from active movement and exploration of objects to disambiguate the scene properties on which visual cues depend. The research will focus on three monocular cues to surface orientation and shape - figure shape, texture, and motion - and on how the brain combines these cues with stereoscopic cues. The psychophysics is motivated by and will be coupled with computational modeling of ideal Bayesian observers for visual cue integration, learning, and multi-modal cue integration. The results of the proposed research will elucidate the types of statistical inferences built into the neural computations underlying visual depth perception and define the limits of those computations. This will ultimately direct and constrain future studies of the neural mechanisms underlying vision.
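To make the robust-integration idea concrete, here is a minimal Python sketch of Bayesian cue fusion under a mixture prior, in which a second cue is either fused with the first (common cause) or discounted as unrelated. It is illustrative only: the function names, the two-hypothesis mixture form, and the parameter values (p_same, v_broad) are assumptions for exposition, not the proposal's actual model.

    import numpy as np

    def gauss_pdf(x, var):
        """Zero-mean Gaussian density with variance var."""
        return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

    def fuse_cues(m1, v1, m2, v2):
        """Standard reliability-weighted fusion of two Gaussian cue estimates."""
        w1 = (1.0 / v1) / (1.0 / v1 + 1.0 / v2)
        mean = w1 * m1 + (1.0 - w1) * m2
        var = 1.0 / (1.0 / v1 + 1.0 / v2)
        return mean, var

    def robust_fuse(m1, v1, m2, v2, p_same=0.8, v_broad=100.0):
        """Mixture-prior ('robust') fusion: with probability p_same the cues
        share a common cause and are fused; otherwise cue 2 is treated as
        unrelated (broad variance v_broad, an illustrative assumption) and
        the estimate falls back on cue 1 alone."""
        conflict = m1 - m2
        like_same = gauss_pdf(conflict, v1 + v2)       # discrepancy if common cause
        like_diff = gauss_pdf(conflict, v1 + v_broad)  # discrepancy if unrelated
        p_post = p_same * like_same / (p_same * like_same
                                       + (1.0 - p_same) * like_diff)
        fused, _ = fuse_cues(m1, v1, m2, v2)
        return p_post * fused + (1.0 - p_post) * m1    # model-averaged estimate

    # Small conflict: cues are averaged; large conflict: cue 2 is vetoed.
    print(robust_fuse(10.0, 4.0, 12.0, 4.0))  # near the precision-weighted mean, 11.0
    print(robust_fuse(10.0, 4.0, 40.0, 4.0))  # near cue 1 alone, 10.0

With a small conflict the estimate approximates the standard precision-weighted average; with a large conflict the posterior weight on the common-cause hypothesis collapses and the conflicting cue is effectively vetoed rather than averaged in, which is the signature behavior of robust cue integration.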

Agency: National Institutes of Health (NIH)
Institute: National Eye Institute (NEI)
Type: Research Project (R01)
Project #: 5R01EY017939-05
Application #: 8123262
Study Section: Central Visual Processing Study Section (CVP)
Program Officer: Steinmetz, Michael A
Project Start: 2007-08-01
Project End: 2013-07-31
Budget Start: 2011-08-01
Budget End: 2013-07-31
Support Year: 5
Fiscal Year: 2011
Total Cost: $295,680
Indirect Cost:
Name: University of Rochester
Department: Miscellaneous
Type: Schools of Arts and Sciences
DUNS #: 041294109
City: Rochester
State: NY
Country: United States
Zip Code: 14627
Publications:
Issen, Laurel; Huxlin, Krystel R; Knill, David (2015) Spatial integration of optic flow information in direction of heading judgments. J Vis 15:14
Dieter, Kevin C; Hu, Bo; Knill, David C et al. (2014) Kinesthesis can make an invisible hand visible. Psychol Sci 25:66-75
Kwon, Oh-Sang; Knill, David C (2013) The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning. Proc Natl Acad Sci U S A 110:E1064-73
Hu, Bo; Knill, David C (2011) Binocular and monocular depth cues in online feedback control of 3D pointing movement. J Vis 11:
Moreno-Bote, Ruben; Knill, David C; Pouget, Alexandre (2011) Bayesian sampling in visual perception. Proc Natl Acad Sci U S A 108:12491-6
Hu, Bo; Knill, David C (2010) Kinesthetic information disambiguates visual motion signals. Curr Biol 20:R436-7
Seydell, Anna; Knill, David C; Trommershauser, Julia (2010) Adapting internal statistical models for interpreting visual cues to depth. J Vis 10:1.1-27
Greenwald, Hal S; Knill, David C (2009) Orientation disparity: a cue for 3D orientation? Neural Comput 21:2581-604