Spoken dialogue is one of the most basic forms of human language, ubiquitous across cultures. It differs in form from written language and read speech, with different vocabulary usage, different syntax, and an expanded use of visually available context. Most importantly, speakers and hearers in dialogue must rely on intonation, the 'melody' of speech. When we talk, we manipulate the pitch, rate, phrasing, and volume of our speech. Patterns of intonation across a conversation can convey complex discourse information not available from the words alone, such as what is already known by both speakers, what is newly introduced to one or both of them, which information they have finished discussing, and what remains to be talked about. Although intonational structuring of discourse information has been reported for numerous languages, a theory of the general cognitive mechanism underlying the universal use of intonation has yet to be established. The cross-linguistic research proposed here is crucial for the development of a general theory of intonation use in human language processing. The focus on analyses of unscripted conversational speech provides the most accurate information available about basic human language performance. Studying spontaneous speech has been considered an intractable problem because it is hard to predict the specific words and sentence structures a speaker will use. We have piloted novel methodology that allows the collection of multiple tokens of like utterances from the same speaker under varying intonational conditions. To understand how listeners respond in conversation, we use head-mounted eye-movement monitoring, an immediate, implicit measure of comprehension that allows the listener to speak and move while looking at the objects described by a conversational partner. Comparisons of English and Japanese, two languages that differ substantially in their syntax and intonation, test whether intonation is used differently in languages that provide melodic cues with greater or lesser reliability and in different physical forms. Understanding how consistently intonation marks the information status of words, and whether intonational cues facilitate listeners' comprehension of messages, is important not only for theories of language processing and development, but also for accurate speech recognition and generation systems in artificial intelligence, and for the development of effective diagnoses and therapies for aphasic patients and others with communication loss.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC007090-02
Application #: 7253196
Study Section: Language and Communication Study Section (LCOM)
Program Officer: Shekim, Lana O
Project Start: 2006-07-01
Project End: 2011-06-30
Budget Start: 2007-07-01
Budget End: 2008-06-30
Support Year: 2
Fiscal Year: 2007
Total Cost: $216,654
Indirect Cost:
Name: Ohio State University
Department: Miscellaneous
Type: Schools of Arts and Sciences
DUNS #: 832127323
City: Columbus
State: OH
Country: United States
Zip Code: 43210
Ito, Kiwako; Speer, Shari R (2008) Anticipatory effects of intonation: Eye movements during instructed visual search. J Mem Lang 58:541-573