This project investigates how prosody is used in spoken language to signal how words are grouped into phrases and how each word and phrase contributes information to the discourse. In text these functions are signaled through punctuation and text enhancement (e.g., boldface), but in spoken language pitch, timing, loudness, voice quality, and other properties of speech convey such information. Linguists propose that underlying the prosody of all languages is a universal structure, the Headed Constituent (HC), which groups syllables and words together, with one prominent element per constituent designated the Head element. These constituents determine how speech sounds are coordinated (at a physical level), and how speech is mapped onto the syntactic and semantic structures that determine utterance meaning.
Prosodic constituents are examined in English, Spanish, and French, three languages that are known to differ not only in their prosody (e.g., intonation and rhythmic patterns), but also in the associations linking prosody to syntax and semantics. Experiments conducted in Illinois, Barcelona, and Lyon will show how listeners perceive the prosodic phrasing and prominence patterns of an utterance when presented with speech samples that differ in their phonetic properties (pitch and timing) and in their syntactic and semantic features. A novel method of real-time, auditory transcription with non-expert listeners and conversational speech samples is designed to approximate the conditions of normal language use as closely as possible. Parallel experiments on English, Spanish, and French will provide critical evidence regarding the universality of the Headed Constituent as the structure that underlies prosodic form, and will also shed light on differences among languages in the role of prosody in communicating linguistic meaning.
Project findings will contribute valuable benchmark data on how prosody functions in conversational speech, with future applications in clinical settings to identify speech disorders involving prosody, in second language teaching, and in the development of speech technologies for human-computer interfaces.