To interpret the world, the brain must parse the continuous stream of sensory input into a set of distinct, salient objects. This process of scene segmentation has been studied at the perceptual level in various sensory systems for almost a century, yet the computational principles and neural representations that underlie it remain poorly understood. This research project attacks the problem of scene segmentation within the visual system, using a bistable transparent motion depth illusion as a model system. In a hypothesis-driven approach, we construct a simple model of motion segmentation that makes testable predictions of novel psychophysical and physiological phenomena. This model, which rests on a single central conjecture, the one-point-one-motion constraint, is tested in two ways.
Specific Aim 1 will test predictions at the perceptual level, examining the interaction between the transparent motion depth illusion and binocular disparity.
Specific Aim 2 will test predictions at the physiological level, elucidating the neural representation of segmented transparent motion in cortical area MT. Although the proposed research uses the visual system as an experimental platform, the principles we test are broadly relevant to theories of object identification, neural representation, and information processing in the auditory and somatosensory domains as well.

Agency: National Institutes of Health (NIH)
Institute: National Institute of Neurological Disorders and Stroke (NINDS)
Type: Predoctoral Individual National Research Service Award (F31)
Project #: 5F31NS047117-03
Application #: 7074672
Study Section: Special Emphasis Panel (ZRG1-F02B (20))
Program Officer: Chen, Daofen
Project Start: 2004-06-21
Project End: 2007-06-20
Budget Start: 2006-06-21
Budget End: 2007-06-20
Support Year: 3
Fiscal Year: 2006
Total Cost: $38,079
Indirect Cost:
Name: Stanford University
Department: Biology
Type: Schools of Medicine
DUNS #: 009214214
City: Stanford
State: CA
Country: United States
Zip Code: 94305