Objects move through the environment in three dimensions, and humans are clearly capable of perceiving such motion in depth. Visual neuroscientists do not, however, know how our nervous system encodes this fundamental aspect of the visual world. Despite large literatures on motion and depth perception, surprisingly little work has integrated these two visual features to characterize directly how the brain processes the three-dimensional direction of moving objects. The goal of the proposed research is to understand how neural circuits in the primate brain exploit binocular information to represent the direction of objects moving through a 3D environment.

First, we will psychophysically characterize the binocular cues to 3D motion. Recent work suggests that the perception of motion through depth relies on two binocular cues: one based on changing disparities over time, and one based on an interocular comparison of velocities. Our overarching hypothesis is that the velocity-based cue is of central importance for the perception of 3D motion. We will therefore perform psychophysical experiments that isolate eye-specific motion signals and characterize them relative to (and in interaction with) disparity-based signals. These experiments will unpack the psychophysical building blocks and signatures of this perceptual information, and will refine the visual displays used for neuroimaging and electrophysiological studies.

Second, we will use neuroimaging to identify the neural circuits that process 3D motion. Taking stimuli and insights from our psychophysical experiments, we will perform fMRI experiments to visualize this processing in the human brain. Direction-selective adaptation protocols will be used to characterize both the disparity-based and velocity-based cues and their interactions, and to understand how (or whether) these cues are integrated into a single, cue-independent representation of 3D motion. These experiments will also assess how directly psychophysical assays of the system map onto neural signals.

Third, we will perform electrophysiology to specify the underlying neural computations. Guided by the psychophysics and neuroimaging, we will perform single-neuron recordings to characterize the binocular neural signals that encode 3D motion. Recordings in V1 will employ multi-electrode arrays; recordings in MT will employ both multiple-tetrode and single-electrode awake preparations. This work will test the hypothesis that eye-specific motion signals are represented at the level of V1 and are then integrated (by single neurons) in MT. This 3D motion pathway may be exposed at slower stimulus speeds, and thus may be multiplexed in the same circuitry known to extract 2D (frontoparallel) direction when assessed at faster speeds.

Public Health Relevance

Objects in the real world move through three dimensions, and humans rely on their ability to sense motion in depth in order to guide appropriate actions. A thorough understanding of the neural basis of 3D motion perception will prove valuable for the refinement of biomedical technologies involving the visually guided control of robotic or prosthetic devices in a dynamic three-dimensional environment.

Agency: National Institutes of Health (NIH)
Institute: National Eye Institute (NEI)
Type: Research Project (R01)
Project #: 5R01EY020592-02
Application #: 8316118
Study Section: Central Visual Processing Study Section (CVP)
Program Officer: Steinmetz, Michael A
Project Start: 2011-09-01
Project End: 2015-08-31
Budget Start: 2012-09-01
Budget End: 2013-08-31
Support Year: 2
Fiscal Year: 2012
Total Cost: $382,816
Indirect Cost: $116,316
Name: University of Texas Austin
Department: Biology
Type: Schools of Arts and Sciences
DUNS #: 170230239
City: Austin
State: TX
Country: United States
Zip Code: 78712