Objects move through three dimensions, and the accurate perception of motion through depth underlies many human behaviors. From everyday activities like driving a car or shaking hands, to specialized skills like performing surgery or hitting a baseball, seeing the 3D trajectory of a moving object is a central perceptual capacity. Although much research has focused on how the brain processes motion on flat (2D) surfaces, surprisingly little is known about how cues to depth are combined with motion signals to represent 3D motion. The goals of this project are therefore to identify and characterize the neural mechanisms involved in representing the 3D direction of motion. Because a human's eyes are horizontally offset within the head, each eye views the visual world from a slightly different vantage point. The visual system must exploit the dynamic pattern of differences between the two eyes' views to extract the direction of 3D motion. With support from a National Science Foundation CAREER award, Dr. Alex Huk and colleagues at the University of Texas at Austin will perform a series of behavioral experiments to identify which pieces of binocular information are used to represent 3D motion. A series of functional magnetic resonance imaging experiments will then use the same experimental displays to identify the resulting signals in relevant parts of the human brain, including primary visual cortex, the middle temporal area (an area known to process 2D motion), and subregions within the posterior parietal lobe. To more directly link perceptual experience and brain activity, measurements of perceptual sensitivity to particular forms of 3D motion will then be quantitatively compared to measurements of neural sensitivity to these same motions.

These studies will provide a thorough characterization of how the brain processes visual motion in realistic environments, extending careful behavioral and neural studies of 2D motion and static depth processing to a dynamic 3D world. The results will not only facilitate the integration and extension of current understanding of model subsystems within the visual cortex, but will more generally characterize some of the ways in which the nervous system represents information that is fundamentally complex and multidimensional. This work may also enable the development of 3D visual display technologies that are better suited to human visual capabilities. The CAREER award will support the training of undergraduate, graduate, and postdoctoral researchers, both in the classroom and in the laboratory. It will also facilitate the development of compelling visual demonstrations at the heart of educational outreach efforts in high schools in both urban and rural areas around Austin.

Project Report

Our research activities have focused on intertwined behavioral (psychophysics) and neuroimaging (fMRI) experiments in humans as they view dynamic 3D visual displays. We discovered that people can discriminate the direction in which objects move through depth (i.e., towards or away from the observer) even under conditions in which they cannot discriminate the position in depth of the very same object. This dissociation between (impaired) position-in-depth perception and (unimpaired) motion-through-depth perception suggests that 3D motion perception does not depend exclusively on estimating the depth of an object and then tracking how that depth changes over time: instead, the visual system appears to perform a more direct comparison between the velocities seen by the two eyes. Because the two eyes are horizontally offset, they see slightly different versions of the visual scene. This can be demonstrated by moving your thumb directly towards and away from your nose: as it approaches the face, your right eye "sees" leftward motion and your left eye "sees" rightward motion (and vice versa for receding motion). We studied this eye-specific velocity information and how it flows through the brain, and found that it is processed by a region of visual cortex ("Area MT") long believed to process only frontoparallel motion (up/down/left/right) and position-in-depth. This work has also guided the development of visual display demos for teaching and outreach. We have developed state-of-the-art animations for our public presentations and host this content on our own YouTube channel.
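To make the geometry of that thumb demonstration concrete, the following minimal Python sketch (illustrative only, not part of the project's code; the interocular distance and motion parameters are assumed values) shows that a point approaching along the midline projects rightward image motion in the left eye and leftward image motion in the right eye, and that the difference between these eye-specific velocities signals motion through depth.

```python
# Illustrative sketch of the binocular geometry described above.
# All names and parameter values here are assumptions for demonstration.
import math

IOD = 0.063                          # assumed interocular distance, ~6.3 cm
eye_x = {"left": -IOD / 2, "right": +IOD / 2}

def azimuth(eye, z, x=0.0):
    """Horizontal visual angle (radians) of a point at (x, z) for one eye.
    Positive values are rightward of straight ahead."""
    return math.atan2(x - eye_x[eye], z)

# A point on the midline moving toward the head, sampled over a small step:
# starts 50 cm away and moves 1 cm closer.
z0, dz = 0.50, -0.01
for eye in ("left", "right"):
    dtheta = azimuth(eye, z0 + dz) - azimuth(eye, z0)
    print(f"{eye:>5} eye image motion: {math.degrees(dtheta):+.3f} deg "
          f"({'rightward' if dtheta > 0 else 'leftward'})")

# The two retinal velocities have opposite signs; their difference signals
# motion through depth even without a reliable estimate of instantaneous depth.
```

In the perception literature, this comparison of eye-specific velocities is commonly referred to as the interocular velocity difference (IOVD) cue.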

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Application #: 0748413
Program Officer: Akaysha Tang
Budget Start: 2008-05-15
Budget End: 2014-04-30
Fiscal Year: 2007
Total Cost: $550,000
Name: University of Texas Austin
City: Austin
State: TX
Country: United States
Zip Code: 78712