Due to their different locations, our left and right eyes have slightly different views of the world. It has been known since the 1830s that a horizontal disparity (shift) between the positions of an image on the retinas of the left and right eyes is sufficient for humans to see that image in depth. Combining the two retinal images allows us to see stereoscopically the full three-dimensional volume of scenes (and even some movies and comic books). Stereoscopic vision helps people navigate, locate, identify, and grasp objects, judge distances, and drive vehicles. The goal of this research project is to understand how the brain interprets retinal disparities as cues to stereoscopic depth. Horizontal disparities are relatively easy to interpret because the separation of the eyes, when we are in an upright posture, is horizontal. However, in naturalistic scenes disparities can have any direction, not just horizontal, and interpreting them is complicated by the variety of spatial arrangements among objects, which influences the disparity directions we encounter. Therefore, understanding how humans see depth requires that we understand how the brain analyzes this two-dimensional (horizontal and vertical) disparity signal. The investigator will measure the depth that people perceive as they view displays containing several patterns, each with its own disparity direction.
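The relation between horizontal disparity and perceived depth can be illustrated with the textbook pinhole stereo geometry, in which depth is inversely proportional to disparity (Z = f·B/d). This sketch is not part of the project itself; the function name and parameter values are illustrative only:

```python
# Textbook stereo geometry sketch (illustrative, not the project's method):
# for two horizontally separated eyes/cameras with focal length f and
# baseline B, a horizontal disparity d corresponds to depth Z = f * B / d.

def depth_from_disparity(disparity, focal_length, baseline):
    """Return depth (in the units of `baseline`) from horizontal disparity.

    disparity    -- horizontal image shift between the two views (pixels)
    focal_length -- focal length of the simple camera/eye model (pixels)
    baseline     -- separation between the two viewpoints (e.g., metres)
    """
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length * baseline / disparity

# Example: a ~6.5 cm interocular separation, a nominal focal length of
# 1000 px, and a 10 px disparity give a depth of 6.5 m.
print(depth_from_disparity(10.0, 1000.0, 0.065))  # -> 6.5
```

Note that this simple inverse relation holds only for purely horizontal disparities; as the abstract emphasizes, real scenes produce disparities in arbitrary directions, which is what makes the brain's interpretation problem harder.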

The knowledge gained from these studies will help scientists understand how humans and other animals combine images from the two eyes in order to recover information not available from either eye alone and how perception of 3-D space can be distorted when this information is combined incorrectly. Learning about the brain's strategies in achieving normal 3-D vision will help explain why visual areas of the brain are organized the way they are and aid in the design of treatments for impaired stereoscopic vision (from, for example, amblyopia and strabismus, or "lazy eye"). It will also help in the design of more effective artificial depth-sensing systems and improve machine-vision algorithms for object recognition, robotics, and visual prosthetic devices.

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Type: Standard Grant (Standard)
Application #: 1257096
Program Officer: Catherine Arrington
Budget Start: 2013-03-01
Budget End: 2017-02-28
Fiscal Year: 2012
Total Cost: $435,907
Name: Syracuse University
City: Syracuse
State: NY
Country: United States
Zip Code: 13244