Our eyes are never at rest. Even when we are fixating a visual target, small involuntary eye movements continuously perturb the projection of the image on the retina. A substantial body of recent evidence indicates that these fixational eye movements are an important component of the way visual information is acquired and represented in the brain. Neurophysiological studies with monkeys have shown that small eye movements strongly affect the responses of neurons in the visual system. In the research of a previous NSF award (EIA-0130851), we have shown that fixational instability is crucial for identifying stimuli presented for the brief durations that occur during natural viewing (Rucci and Desbordes, 2003) and improves the efficiency of visual representations in models of the lateral geniculate nucleus and primary visual cortex (Rucci and Casile, 2003b, 2004). Furthermore, reproduction of human eye movements in a robotic system has shown that fixational instability may provide reliable information about distance (Santini and Rucci, 2003); the parallax relation underlying this cue is sketched below.

Building on our previous results and those of recent neurophysiological studies, this proposal continues part of the research of NSF award EIA-0130851. It describes a program of research that focuses on extracting and integrating multiple depth cues to develop a reliable 3D representation of the visual scene. This project follows an interdisciplinary approach that integrates computer modeling of the visual cortex with robotic experiments and measurements of eye movements in human subjects. The long-term goal of this research is to develop machine vision systems that, by emulating the computational principles of the brain, achieve high levels of robustness and efficiency.

The specific aims of this project are to:

1. Develop a model of the monkey primary visual cortex in which, following recent neurophysiological findings, distinct populations of neurons respond during different phases of small eye movements.

2. (a) Analyze the information about relative distance transmitted by different neuronal populations when the model is coupled with a robotic oculomotor workstation that replicates human eye movements; and (b) integrate the depth information provided by different cues into a coherent 3D representation of the visual scene.

3. Investigate the self-organization of the model by means of learning, so that the extraction and integration of depth information are autonomously tuned to the physical and motor characteristics of the system.

In this research, the eye movements of human observers will be measured by a high-resolution eye tracker and then accurately replicated by a robotic system (a head/eye system with two mobile cameras) that we have specifically designed to reproduce the visual inputs to the eyes during oculomotor behavior. Visual images acquired in this way will be applied as input to computational models of neurons in the striate cortex of primates.

Intellectual merit: This project establishes a direct link between human and machine vision studies. By focusing on the computational mechanisms by which eye movements affect visual processing, it has the potential to provide new insights into the brain as well as to open the way to the development of new algorithms in machine vision.
Broader impact: The interdisciplinary nature of this research offers new opportunities for training students. By collaborating with the PI, students in the Department of Cognitive and Neural Systems at Boston University, a department with an important focus on computational modeling of the brain, will have an opportunity to combine their theoretical work with the development of machine vision systems.
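To make the distance cue referenced above concrete: when the eye translates by a small amount while rotating to keep the fixated target foveated, the retinal slip of a non-fixated point is approximately proportional to the difference between the inverse of its depth and the inverse of the fixation depth. The Python sketch below illustrates only this motion-parallax relation; it is not the proposed cortical model or robotic pipeline, and the focal length, displacement, and depths are hypothetical values chosen for illustration.

    # Minimal sketch (not the authors' implementation) of why fixational
    # instability carries depth information. Assumptions: a pinhole camera of
    # focal length f fixating a target at depth Z0, translating laterally by a
    # small baseline b while rotating to keep the target foveated. Under these
    # assumptions the horizontal retinal slip of a point at depth Z is
    # approximately  slip = f * b * (1/Z - 1/Z0).

    def retinal_slip(Z, Z0, baseline, focal_length):
        """Approximate horizontal image displacement (same units as
        focal_length) of a point at depth Z while fixating depth Z0."""
        return focal_length * baseline * (1.0 / Z - 1.0 / Z0)

    def depth_from_slip(slip, Z0, baseline, focal_length):
        """Invert the parallax relation to recover depth from measured slip."""
        return 1.0 / (slip / (focal_length * baseline) + 1.0 / Z0)

    if __name__ == "__main__":
        f = 0.017      # focal length in metres (roughly eye-like; assumed)
        b = 0.0005     # 0.5 mm lateral displacement, on the order of fixational drift
        Z0 = 1.0       # fixated target at 1 m
        for Z in (0.5, 0.8, 1.2, 2.0):
            slip = retinal_slip(Z, Z0, b, f)
            Z_hat = depth_from_slip(slip, Z0, b, f)
            print(f"true depth {Z:4.1f} m  slip {slip*1e6:8.2f} um  recovered {Z_hat:4.2f} m")

In this toy setting the slip is only a few micrometres on the retina, which is why the proposal emphasizes accurate replication of eye movements and analysis of how neuronal populations could transmit such small differential motion signals.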

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Communication Foundations (CCF)
Application #: 0432104
Program Officer: Elliott Francis
Project Start:
Project End:
Budget Start: 2004-10-01
Budget End: 2008-09-30
Support Year:
Fiscal Year: 2004
Total Cost: $305,000
Indirect Cost:
Name: Boston University
Department:
Type:
DUNS #:
City: Boston
State: MA
Country: United States
Zip Code: 02215