In the cognitive, neural, and behavioral sciences, many research projects study language processing and many study visual perception. Few, however, study both language and vision and how the two interact in real time, even though a large portion of our everyday perceptual and cognitive experience requires coordinating language and vision. The objective of this research project is to study the online processes by which humans integrate information between language comprehension and visual perception. This work will not only point to a number of cases where language and vision interact more fluidly than previously thought, but will also test the limits of that interaction. Identifying which particular linguistic representations interface with which particular visual representations will provide insight into a fundamental architecture of information integration. This architecture is explored with a localist attractor network that uses feedback projections from integrated representations back to the modality-specific representations, with the aim of quantitatively simulating the experimental results. A series of experiments examines how incremental language comprehension constrains visual perception and attention in real time. In a recent study of visual search, we found that the incremental spoken delivery of target features causes search for a conjunction target to behave more like a nested pair of single-feature searches, dramatically reducing the slope of the reaction time × set size function. Thus, it appears that the language system can communicate with the visual system quickly enough to guide search upon hearing just the first of a pair of spoken target features. Accompanying the audio/visual search experiments with neural network simulations of the search process maintains a cyclic interplay between theory and data: new experimental results elicit improvements to the model, and additional results from the improved model elicit new hypotheses that can be tested experimentally. Together, these projects promise to reveal many important aspects of the temporal dynamics of visual and linguistic information integration, thereby constraining cognitive, neural, and behavioral theories of the mutual interaction between language comprehension and visual perception.
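
To make the modeling approach concrete, the sketch below implements one common form of localist attractor dynamics in Python, in the style of normalized-recurrence competition. It is an illustrative assumption, not the project's actual model: each unit stands for a whole interpretation, each modality contributes a normalized evidence vector, and an integration layer feeds its pooled activation back to the modality-specific vectors until one interpretation wins. The function name, parameter values, and example evidence vectors are all hypothetical.

    import numpy as np

    def normalized_recurrence(constraints, weights, criterion=0.95, max_cycles=200):
        # constraints: one evidence vector per modality (same length each),
        #   e.g., linguistic and visual support for each candidate interpretation.
        # weights: relative weight of each modality.
        # Each cycle: normalize each modality's vector, pool the vectors into
        # an integration layer, then feed the integrated activation back to
        # the modality-specific vectors, so activation gradually settles on
        # one interpretation (a localist attractor).
        cs = [np.asarray(c, dtype=float) for c in constraints]
        for cycle in range(1, max_cycles + 1):
            cs = [c / c.sum() for c in cs]
            integration = sum(w * c for w, c in zip(weights, cs))
            if integration.max() >= criterion:
                return integration, cycle
            # feedback: integrated evidence re-weights each modality's vector
            cs = [c + integration * w * c for w, c in zip(weights, cs)]
        return integration, max_cycles

    # Hypothetical example: two candidate interpretations, with linguistic
    # evidence slightly favoring the first and visual context favoring the
    # second; the feedback cycles settle the competition.
    linguistic = [0.6, 0.4]
    visual = [0.3, 0.7]
    winner, cycles = normalized_recurrence([linguistic, visual], weights=[0.5, 0.5])
    print(winner, cycles)

Because the feedback step multiplies each modality's evidence by the pooled integration activation, small cross-modal biases compound over cycles, which is what lets such a network settle gradually rather than committing to an interpretation in a single step.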
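The reaction time × set size claim can also be illustrated with simple arithmetic. The toy sketch below uses hypothetical parameter values, not data from the project: it contrasts a standard serial conjunction search, whose reaction time grows linearly with display size, against a nested search in which the first spoken feature restricts attention to the matching subset, cutting the effective slope roughly in proportion to the subset's size.

    # Toy illustration with hypothetical parameters (not data from the
    # project): predicted search reaction times under a standard serial
    # conjunction search versus a "nested" search in which the first spoken
    # feature immediately restricts attention to the matching subset.
    BASE_MS = 400            # assumed non-search overhead per trial
    SLOPE_MS_PER_ITEM = 25   # assumed serial conjunction-search slope

    def rt_conjunction(set_size):
        # reaction time grows linearly with the full display size
        return BASE_MS + SLOPE_MS_PER_ITEM * set_size

    def rt_incremental(set_size, subset_fraction=0.5):
        # hearing the first target feature restricts search to the items
        # sharing that feature, shrinking the effective set size
        return BASE_MS + SLOPE_MS_PER_ITEM * subset_fraction * set_size

    for n in (4, 8, 12, 16):
        print(n, rt_conjunction(n), rt_incremental(n))

Under these assumptions, halving the effective set size halves the slope of the reaction time × set size function, which is the qualitative signature the abstract describes.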

Agency
National Institutes of Health (NIH)
Institute
National Institute of Mental Health (NIMH)
Type
Research Project (R01)
Project #
5R01MH063961-03
Application #
6778333
Study Section
Biobehavioral and Behavioral Processes 3 (BBBP)
Program Officer
Kurtzman, Howard S
Project Start
2002-09-01
Project End
2006-07-31
Budget Start
2004-08-11
Budget End
2006-07-31
Support Year
3
Fiscal Year
2004
Total Cost
$78,500
Indirect Cost
Name
Cornell University
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
872612445
City
Ithaca
State
NY
Country
United States
Zip Code
14850
Farmer, Thomas A; Cargill, Sarah A; Spivey, Michael J (2007) Gradiency and Visual Context in Syntactic Garden-Paths. J Mem Lang 57:570-595