When two people engage in dialogue, a substantial part of their message is conveyed by intonation. Even when interlocutors are not visually co-present, listeners have access to a great deal of information beyond the spoken strings of consonants and vowels, including variation in rhythm, tune, tempo, loudness, tenseness, and tone of voice. Intonation provides an organizational structure for speech and can simultaneously convey a speaker's attitude, the purpose of an utterance, and the relative importance of particular words or phrases. It can mark the difference between immediately relevant and background information; express contrast, contradiction, and correction; and indicate the intended syntax of ambiguous utterances. Basic research into the relationship between intonation and speakers' and hearers' intentions about syntax and information structure addresses whether, when, and how speakers use intonational information to signal linguistic and paralinguistic meaning, as well as whether, when, and how listeners use this information to recover meaning.
With National Science Foundation support, Dr. Shari Speer and Dr. Kiwako Ito will explore how speakers and listeners use intonation to communicate information structure in spontaneous dialogues in English and Japanese. The cross-linguistic comparison addresses whether prosody has the same kind of communicative function in languages that are syntactically and melodically very different. What aspects of intonation are most important for language understanding, and are these common across languages? Do speakers pronounce certain tunes because those tunes will be helpful for listeners, or do they focus primarily on their own interpretation of a message? These questions are motivated by a more basic one: What is universal about intonation in human conversation, and how does it reflect the structure of cognitive function during language use?