People often need to judge the shapes and movements of 3-D objects from a distance. Images on the retinae are 2-D, but pattern and motion information contain cues about 3-D shapes. Building on our previous work, we expect to make significant progress in understanding 3-D shape perception from these cues, and thus to offer a prototype for how the brain extracts information from the world to infer environmental properties. When surfaces have texture patterns, deformations of these patterns in retinal images provide cues the brain uses to judge 3-D shape. When signals from the eyes reach the first cortical area, V1, they are processed by neurons that are selectively tuned to orientations and spatial frequencies. We parsed texture deformations into orientation flows and spatial frequency gradients, showing that particular orientation flows evoke percepts of specific 3-D shapes, whereas frequency gradients provide cues to relative depth. These results led to models of how later cortical neurons could extract texture patterns and signal 3-D shapes.

We now propose to extend our approach beyond static objects. When objects change shape (e.g. by bending, coiling, or contracting) as they move (e.g. walk, tumble, crawl, hop, slide, or swim), changes in the retinal image create patterns of local velocities that provide additional cues to 3-D shape. Since any retinal image can result from projections of many different 3-D objects, the brain relies on prior assumptions to infer the correct shape. Previous studies have examined only rigid objects, using shape-inference models based on the brain assuming either that the object is rigid or that faster-moving points are nearer. We will use novel stimuli that put these prior assumptions in conflict, and thus examine how the brain selects which prior to apply. This work will culminate in a model for choosing between conflicting assumptions, something that is often required in perception and cognition.
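As a rough illustration of choosing between conflicting priors (not the authors' actual model), the decision can be framed as maximum-a-posteriori selection: each candidate 3-D interpretation gets a likelihood score for how well it explains the retinal velocities, plus a prior weight, and the interpretation with the highest posterior wins. All numbers below are illustrative placeholders.

```python
import numpy as np

def map_interpretation(log_likelihoods, log_priors):
    """Return the index of the interpretation with the highest posterior.

    log_likelihoods: how well each candidate 3-D shape explains the
                     observed retinal velocity pattern (log scale).
    log_priors:      log prior weight assigned to each candidate's
                     underlying assumption (e.g. rigidity).
    """
    log_post = np.asarray(log_likelihoods) + np.asarray(log_priors)
    return int(np.argmax(log_post))

# Candidate 0: rigid-object interpretation; candidate 1: interpretation
# favored by the "faster points are nearer" assumption. Values are
# hypothetical, chosen only to show the arbitration mechanism.
log_lik = [-1.2, -1.0]    # candidate 1 fits the image data slightly better
log_prior = [-0.2, -0.9]  # but the rigidity prior is weighted more strongly
print(map_interpretation(log_lik, log_prior))  # -> 0 (rigidity prior wins)
```

Under this framing, stimuli that place the priors in conflict correspond to cases where the likelihoods disagree with the prior weights, so the observer's percept reveals which prior dominates.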
Next, we will use randomly deforming non-rigid 3-D waves to examine how global shape properties, e.g. symmetry, influence perceived object motions by selectively combining disparate outputs of motion-sensitive neurons. These results will unveil interactions between the form and motion cortical systems. Finally, we will examine observers' percepts of dynamic shape changes that require more sophisticated analyses of retinal velocity patterns. Neurons in the motion-sensitive cortical area MT respond to 1-D motion shear and compression/divergence, so we will extract these qualities and combine them into 2-D velocity patterns of divergence, rotation, and deformation. Neural filters based on these patterns will be used to explain perceived changes in 3-D shapes. We will base our filters on responses of MT and later cortical neurons to our stimuli, measured in a parallel project. We will thus present the first neural model that can explain observers' percepts of both rigid and non-rigid textured objects. The performance of our model will be compared against the best computer-vision models on motion-capture data from real deforming objects. We expect this project to introduce new ideas, methods, and results for understanding visual perception of 3-D shapes and its deficits in neurological patients.
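The decomposition of a 2-D velocity field into divergence, rotation, and deformation is a standard analysis of the local velocity-gradient tensor; the sketch below (not the proposed neural filters themselves) shows how those components are computed from the spatial derivatives of an image flow field.

```python
import numpy as np

def flow_components(u, v, spacing=1.0):
    """Split the local first-order structure of a 2-D velocity field
    (u = horizontal, v = vertical image velocity) into divergence,
    rotation (curl), and two deformation (shear) components."""
    uy, ux = np.gradient(u, spacing)  # axis 0 -> y, axis 1 -> x
    vy, vx = np.gradient(v, spacing)
    div = ux + vy    # expansion / contraction
    curl = vx - uy   # rotation
    def1 = ux - vy   # stretching deformation
    def2 = uy + vx   # shearing deformation
    return div, curl, def1, def2

# A purely rotating field, u = -y, v = x, has zero divergence and
# uniform curl of 2 everywhere.
n = 5
y, x = np.mgrid[0:n, 0:n].astype(float)
div, curl, d1, d2 = flow_components(-y, x)
print(np.allclose(div, 0.0), np.allclose(curl, 2.0))  # -> True True
```

Filters selective for each component would respond to qualitatively different shape changes, e.g. divergence to looming or contraction, and the deformation terms to bending and shearing of the surface.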

Public Health Relevance

Shape is probably the most important cue for recognizing objects, so neurological disorders affecting shape perception would severely impair such patients' ability to function autonomously. This project will identify how the brain constructs correct and incorrect shape percepts of 3-D objects from texture and motion information. In addition, people with amblyopia, strabismus, and convergence insufficiency often have deficient stereo vision and therefore find texture and motion information particularly useful for perceiving 3-D shapes. Our work on separating velocity information into object motion, object shape, and shape deformation may thus be especially useful to these patients.

National Institutes of Health (NIH)
National Eye Institute (NEI)
Research Project (R01)
Project #
Application #
Study Section
Special Emphasis Panel (SPC)
Program Officer
Wiggs, Cheri
Project Start
Project End
Budget Start
Budget End
Support Year
Fiscal Year
Total Cost
Indirect Cost
State College of Optometry
Schools of Optometry/Ophthalmol
New York
United States
Zip Code
Wool, Lauren E; Crook, Joanna D; Troy, John B et al. (2018) Nonselective Wiring Accounts for Red-Green Opponency in Midget Ganglion Cells of the Primate Retina. J Neurosci 38:1520-1540
Koch, Erin; Baig, Famya; Zaidi, Qasim (2018) Picture perception reveals mental geometry of 3D scene inferences. Proc Natl Acad Sci U S A 115:7807-7812
Koch, Erin; Jin, Jianzhong; Alonso, Jose M et al. (2016) Functional implications of orientation maps in primary visual cortex. Nat Commun 7:13529
Jansen, Michael; Giesel, Martin; Zaidi, Qasim (2016) Segregating animals in naturalistic surroundings: interaction of color distributions and mechanisms. J Opt Soc Am A Opt Image Sci Vis 33:A273-82
Bachy, Romain; Zaidi, Qasim (2016) Properties of lateral interaction in color and brightness induction. J Opt Soc Am A Opt Image Sci Vis 33:A143-9
Wool, Lauren E; Komban, Stanley J; Kremkow, Jens et al. (2015) Salience of unique hues and implications for color theory. J Vis 15:
Dul, Mitchell; Ennis, Robert; Radner, Shira et al. (2015) Retinal adaptation abnormalities in primary open-angle glaucoma. Invest Ophthalmol Vis Sci 56:1329-34
Zhao, Linxi; Sendek, Caroline; Davoodnia, Vandad et al. (2015) Effect of Age and Glaucoma on the Detection of Darks and Lights. Invest Ophthalmol Vis Sci 56:7000-6
Jansen, Michael; Li, Xiaobing; Lashgari, Reza et al. (2015) Chromatic and Achromatic Spatial Resolution of Local Field Potentials in Awake Cortex. Cereb Cortex 25:3877-93
Kremkow, Jens; Jin, Jianzhong; Komban, Stanley J et al. (2014) Neuronal nonlinearity explains greater visual spatial resolution for darks than lights. Proc Natl Acad Sci U S A 111:3170-5

Showing the most recent 10 out of 44 publications