Visual perception is a dynamic process: The two-dimensional image of the world projected onto the receptor surface at the back of the eyeball (the retina) is in a constant state of change due to eye, head, body, and object movements. These changes lead to two-dimensional geometric transformations of the retinal image (e.g., a rectangular door projects a trapezoidal image when it is seen at an angle), size transformations of the image (e.g., an object projects a smaller image when farther away), and temporal gaps in the image (e.g., an object disappears from the image when it moves behind another object). In addition, because our eyes move in rapid, discrete jerks (called saccades), and because the visual system is effectively blind during those movements (information is acquired only during fixational pauses between the movements), the entire image disappears about three times every second. Yet, in spite of all of this change and variation, we experience a constant, stable visual world.

This research is concerned with the way we are able to experience visual object constancy, particularly across saccadic eye movements, in the face of the highly fluctuating visual image with which we are presented. The theoretical framework guiding the research holds that the visual system maintains visual constancy for objects using two sorts of internal representations: a relatively abstract description of the object (i.e., an object type), and a more veridical representation that is tied to the object's spatial location and that includes more specific visual details about the object (i.e., an object token).

Experiments will use the preview paradigm, in which participants will see one image during a first fixation and a second image during a subsequent fixation. Because the change from the first image to the second will take place during a saccade, it will not be directly perceived. Manipulating the similarity of the first image to the second will make it possible to determine the nature of the information that the visual system acquires and retains across a saccade. The research will then be able to address five questions:

(1) What kind of information is preserved across saccadic eye movements during dynamic identification of real-world objects?
(2) How is the information about an object preserved across a saccade related to its spatial position?
(3) Do object types and object tokens represent the same sorts of information?
(4) What is the nature of the spatial reference frame(s) used to represent object position across a saccade?
(5) Does object constancy operate similarly across saccadic eye movements and within a single eye fixation?

An understanding of the processes involved in maintaining visual constancy across saccadic eye movements should allow for improved design of graphical display systems, including virtual reality systems, as well as for the development of improved artificial vision systems, particularly those that attempt to integrate visual information dynamically over time. Finally, this research should contribute to the understanding of visual impairments that lead to an inability to recognize objects and scenes (visual agnosia).
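As an illustration only, the sketch below shows one way the type/token distinction and a preview-paradigm trial could be modeled in code. It is a minimal sketch under assumed simplifications; the names ObjectType, ObjectToken, and preview_trial, and the specific fields chosen, are hypothetical and are not taken from the proposal.

```python
# Hypothetical sketch of the type/token framework and a preview-paradigm
# trial; all names and fields are illustrative, not from the proposal.
from dataclasses import dataclass


@dataclass(frozen=True)
class ObjectType:
    """Abstract, view-independent description of an object (its category)."""
    category: str  # e.g., "door", "cup"


@dataclass(frozen=True)
class ObjectToken:
    """Episodic record tied to a spatial location, with view-specific detail."""
    obj_type: ObjectType
    location: tuple          # position in some spatial reference frame
    orientation_deg: float   # view-dependent detail carried by the token
    size_deg: float          # retinal size in degrees of visual angle


def preview_trial(preview: ObjectToken, target: ObjectToken) -> dict:
    """Compare the previewed object (first fixation) with the target
    (second fixation); the display change itself occurs during the saccade,
    so only the overlap in stored information can yield a preview benefit."""
    type_match = preview.obj_type == target.obj_type
    token_match = (type_match
                   and preview.location == target.location
                   and preview.orientation_deg == target.orientation_deg
                   and preview.size_deg == target.size_deg)
    return {"type_match": type_match, "token_match": token_match}


# Example: same category, but location and view details change across the saccade.
door = ObjectType("door")
before = ObjectToken(door, (10.0, 2.0), orientation_deg=30.0, size_deg=4.0)
after = ObjectToken(door, (12.0, 2.0), orientation_deg=0.0, size_deg=4.5)
print(preview_trial(before, after))  # {'type_match': True, 'token_match': False}
```

In this toy framing, a type-level match without a token-level match corresponds to the case where only abstract category information survives the saccade, which is the kind of contrast the preview manipulations are designed to detect.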

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Application #: 9617274
Program Officer: Jasmine V. Young
Project Start:
Project End:
Budget Start: 1997-09-01
Budget End: 2001-08-31
Support Year:
Fiscal Year: 1996
Total Cost: $209,993
Indirect Cost:
Name: Michigan State University
Department:
Type:
DUNS #:
City: East Lansing
State: MI
Country: United States
Zip Code: 48824