As we move through our visual environment, the pattern of light entering our eyes is strongly shaped by the properties of objects in the environment, their motion relative to one another, and our own motion relative to the external world. This collaborative project will quantify motion within natural scenes, record activity from populations of neurons in the early visual pathway in response to that motion, and develop models of motion representation across neuronal populations. The primary goals of the work are to characterize the biological representation of natural-scene motion in the early stages of visual processing, which set the stage for the cortical computations critical to visual perception, and to unify the biological findings with computational models of motion from the computer vision community.

The perception of visual motion is critical for both biological and computer vision systems. Motion reveals the structure of the world, including the relative and absolute depths of objects and the surface boundaries between them, and it carries information about ego-motion and the independent motion of other objects. How visual motion relates the spatially localized properties of a natural scene to its global properties, and how this relationship is represented by the early visual pathway of the brain, are largely unknown.

This project addresses the computation of local and global properties of natural visual scenes by both distributed neural systems and computer vision algorithms, using a novel set of complex naturalistic stimuli in which the ground-truth properties of the scene are known and all aspects of the scene, including its reflectance, surface properties, lighting, and motion, are under investigator control. A unified probabilistic modeling framework will be adopted that ties together the computational and biological models of natural-scene properties. Neural activity will be recorded from a large population of densely sampled single neurons in the visual thalamus. From the perspective of the computer vision community, an important challenge is to infer the motion of the external environment (the "optical flow") from sequences of 2D images. From the perspective of the neuroscience community, quantifying the distributed neural representation of luminance and motion in the early visual pathway will be a critical step toward understanding how scene information is extracted and prepared for processing in higher visual centers. A team of investigators with experience in computer science, engineering, and neuroscience will develop a theoretical foundation and a rich set of methods for the representation and recovery of local luminance, local motion boundaries, and global motion by brains and machines.
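
To make the optical-flow problem concrete, the sketch below estimates a dense flow field from two consecutive grayscale frames using OpenCV's Farneback method. This is a generic illustration, not the project's own approach, and the frame file names are hypothetical placeholders.

    import cv2

    # Load two consecutive frames as grayscale images (placeholder file names).
    prev_gray = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
    next_gray = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

    # Farneback's dense method returns an H x W x 2 array of per-pixel
    # (dx, dy) displacements: the estimated motion between the two frames.
    # Positional arguments after None: pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Per-pixel speed and direction of the estimated motion.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])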

Project Report

The overall goal of this project was to better understand our natural visual environment, from both the biological and the computer vision perspectives. Although the visual pathway is probably the best-studied pathway in the human brain, most of what we know about it stems from experiments involving very simple, artificial visual stimuli, and our understanding of how the visual system works in the real world is surprisingly limited. In parallel, a number of engineered tools exist for automating important aspects of vision (locating objects within a scene, recognizing objects, etc.), but these approaches tend to break down in the natural environments we actually live in. Our team used an array of tools from computational neuroscience, basic visual neurobiology, and computer vision to approach these problems from several angles and to develop a framework from which the scientific community can make headway in understanding the natural visual environment we navigate every day.

On the biological side, we uncovered and quantified the importance of precise timing in capturing important elements of the visual scene, such as motion. This reflects the tie between the natural time scales of our visual environment (how fast things change), its spatial structure, and the anatomical and biophysical properties of the visual pathway that has evolved to process this information efficiently.

On the computer vision side, we developed a database of naturalistic movies as an important (and previously missing) benchmark for the computer vision community. Specifically, we created a set of movies for which everything about the scene is known and under our control. This is important because laboratories that work on algorithms for extracting information from the natural visual environment need a way to know when an algorithm "gets it right" and how it fails in complicated environments. The database was made publicly available for a grand challenge of optical flow algorithms at the website: http://sintel.is.tue.mpg.de.

Taken together, we believe these parallel approaches within our team acted synergistically: computer vision perspectives helped us better understand biological vision, and biological perspectives helped us develop artificial algorithms for tasks that we as humans take for granted.
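
As a note on how such benchmarks quantify "getting it right": a commonly used accuracy measure for optical-flow benchmarks of this kind is the average endpoint error between an estimated flow field and the ground-truth flow. A minimal sketch follows; the flow arrays here are hypothetical stand-ins.

    import numpy as np

    def average_endpoint_error(flow_est, flow_gt):
        # Each array has shape (H, W, 2), holding per-pixel (dx, dy) vectors.
        # The endpoint error is the Euclidean distance between the estimated
        # and true flow vectors, averaged over all pixels.
        diff = flow_est - flow_gt
        return np.sqrt((diff ** 2).sum(axis=-1)).mean()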

Agency
National Science Foundation (NSF)
Institute
Division of Information and Intelligent Systems (IIS)
Application #
0904727
Program Officer
Kenneth C. Whang
Budget Start
2009-10-01
Budget End
2012-09-30
Fiscal Year
2009
Total Cost
$211,564
Name
SUNY College of Optometry
City
New York
State
NY
Country
United States
Zip Code
10036