People often need to judge the shapes and movements of 3-D objects from a distance. Images on the retinae are 2-D, but their pattern and motion information contain cues to 3-D shape. Building on our previous work, we expect to make significant progress in understanding 3-D shape perception from these cues, and thus to offer a prototype for how the brain extracts information from the world to infer environmental properties. When surfaces carry texture patterns, deformations of these patterns in retinal images provide cues the brain uses to judge 3-D shape. When signals from the eyes reach the first cortical area, V1, they are processed by neurons selectively tuned to orientations and spatial frequencies. We parsed texture deformations into orientation flows and spatial frequency gradients, and showed that particular orientation flows evoke percepts of specific 3-D shapes, whereas frequency gradients provide cues to relative depth. These results led to models of how later cortical neurons could extract texture patterns and signal 3-D shapes. We now propose to extend our approach beyond static objects. When objects change shape (e.g. by bending, coiling, or contracting) as they move (e.g. walk, tumble, crawl, hop, slide, or swim), changes in the retinal image create patterns of local velocities that provide additional cues to 3-D shape. Since any retinal image can result from projections of many different 3-D objects, the brain relies on prior assumptions to infer the correct shape. Previous studies have examined only rigid objects, using shape-inference models in which the brain assumes either that the object is rigid or that faster-moving points are nearer. We will use novel stimuli that put these prior assumptions in conflict, and thus examine how the brain potentiates one prior over another. This work will culminate in a model for choosing between conflicting assumptions, something that is often required in perception and cognition.
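The V1-style front end described above can be illustrated with a minimal sketch: a bank of quadrature Gabor filters, each tuned to an orientation and spatial frequency, with a winner-take-all readout of the locally dominant orientation. All function names and parameter values below (gabor_pair, sigma, freq) are illustrative choices for the sketch, not the project's actual model.

```python
import numpy as np

def gabor_pair(theta, freq, sigma=4.0, size=21):
    """Even/odd Gabor kernels at orientation theta (radians) and spatial
    frequency freq (cycles/pixel), modeling a V1 quadrature pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # coordinate along the modulation axis
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr), env * np.sin(2 * np.pi * freq * xr)

def gabor_energy(img, theta, freq):
    """Phase-invariant (quadrature) energy of img in one (theta, freq) channel,
    computed with FFT-based circular convolution."""
    even, odd = gabor_pair(theta, freq)
    F = np.fft.fft2(img)
    def conv(k):
        return np.real(np.fft.ifft2(F * np.fft.fft2(k, s=img.shape)))
    return conv(even) ** 2 + conv(odd) ** 2

def dominant_orientation(img, thetas, freq=0.125):
    """Per-pixel winner-take-all readout across the orientation channels."""
    energies = np.stack([gabor_energy(img, t, freq) for t in thetas])
    return thetas[np.argmax(energies, axis=0)]
```

Applied to an image of a textured surface, the per-pixel winners trace out the orientation flows, and comparing energies across frequency channels would give the frequency gradients.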
Next, we will use randomly deforming non-rigid 3-D waves to examine how global shape properties, e.g. symmetry, influence perceived object motions by selectively combining disparate outputs of motion-sensitive neurons. These results will unveil interactions between the form and motion cortical systems. Finally, we will examine observers' percepts of dynamic shape changes that require more sophisticated analyses of retinal velocity patterns. Neurons in the motion-sensitive cortical area MT respond to 1-D motion shear and compression/divergence, so we will extract these qualities and combine them into 2-D velocity patterns of divergence, rotation, and deformation. Neural filters formed from these patterns will be used to explain perceived changes in 3-D shape. We will base our filters on responses of MT and later cortical neurons to our stimuli, measured in a parallel project. We will thus present the first neural model that can explain observers' percepts of both rigid and non-rigid textured objects. The performance of our model will be compared against the best computer-vision models on motion-capture data from real deforming objects. We expect this project to introduce new ideas, methods, and results for understanding visual perception of 3-D shapes and its deficits in neurological patients.
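The combination of 1-D velocity gradients into 2-D divergence, rotation, and deformation patterns mentioned above follows the standard first-order decomposition of an optic-flow field. The sketch below computes it with numpy finite differences; it is a minimal numerical illustration of that decomposition, not the proposed neural filters.

```python
import numpy as np

def flow_components(u, v, spacing=1.0):
    """Decompose a 2-D velocity field (u, v) into its first-order differential
    components: divergence, curl (rotation), and two deformation (shear) terms."""
    # np.gradient returns derivatives along axis 0 (y) first, then axis 1 (x)
    du_dy, du_dx = np.gradient(u, spacing)
    dv_dy, dv_dx = np.gradient(v, spacing)
    div  = du_dx + dv_dy      # isotropic expansion / contraction
    curl = dv_dx - du_dy      # rigid 2-D rotation
    def1 = du_dx - dv_dy      # stretch along x with compression along y
    def2 = du_dy + dv_dx      # shear along the diagonals
    return div, curl, def1, def2
```

For example, a radially expanding field (u = x, v = y) yields pure divergence with zero curl and deformation, while a rotating field (u = -y, v = x) yields pure curl; a deforming object produces nonzero def1/def2 maps that signal local shape change.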
Shape is probably the most important cue for recognizing objects, so neurological disorders of shape perception can severely impair patients' ability to function autonomously. This project will identify how the brain constructs correct and incorrect percepts of 3-D object shape from texture and motion information. In addition, people with amblyopia, strabismus, and convergence insufficiency often have deficient stereo vision and therefore rely heavily on texture and motion information to perceive 3-D shape; our work on separating velocity information into object motion, object shape, and shape deformation may be especially valuable to them.