When we talk to one another, we don't just talk. We gesture, point, look at objects, and touch them, and our speech often refers to objects in our environment. This is so prevalent that we gesture even when we're on the phone, and we describe things differently depending on whether the people we're talking to can see us. If speech is not an isolated cognitive activity but depends on integrating information from other processes, how does this integration take place? How do speakers and listeners integrate language, as it is being performed, with other streams of information from the visual environment?

Work on gesture analysis and situated cognition has recently provided a good deal of evidence bearing on this question. Some of that evidence comes from fine-grained linguistic analyses by Dr. Sweetser and Ms. Narayan of utterances and their accompanying bodily movements. With support from NSF, Ms. Narayan will continue this line of research to investigate the role of timing in the integration of speech and non-speech processes. The experiments promise to bring work in linguistics and psychology to bear on the question of how we think when we talk. The results may lead to more effective methods of communication in teaching and to insights into the mechanisms of sign language. The experiments being funded are part of Ms. Narayan's dissertation work.

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Type: Standard Grant (Standard)
Application #: 0450957
Program Officer: Christopher T. Kello
Project Start:
Project End:
Budget Start: 2005-09-01
Budget End: 2006-08-31
Support Year:
Fiscal Year: 2004
Total Cost: $12,000
Indirect Cost:
Name: University of California Berkeley
Department:
Type:
DUNS #:
City: Berkeley
State: CA
Country: United States
Zip Code: 94704