The speech humans produce every day in casual conversation is highly variable, with sounds and even whole syllables altered or missing. American English listeners notice nothing unusual when hearing such "reduced speech" in context; however, second-language speakers, and even listeners from other English-speaking countries, often find American English reduced speech difficult to understand. The current research centers on how speakers and listeners use reduced, spontaneous speech across languages and dialects, and on how such speech may hinder or even facilitate communication among speakers of different backgrounds. The project will test speakers of Dutch, Spanish, Japanese, and three dialects of English to determine 1) to what extent reduction is language-specific and part of the grammar rather than random or physically determined variability, 2) whether the sound patterns of the native language influence phonetic variability in spontaneous speech in the second language, 3) how strongly dialect affects understanding of reduced speech, and 4) how degree of proficiency, years of experience, strength of ethnic/national identity, and related factors affect production and understanding of reduction. The overarching theoretical question is what belongs to the learned grammar and what is low-level variability. The project will also provide data bearing on theoretical questions about exemplar models of speech perception, mutual effects between speakers' first and second languages, and articulatory planning.

Through globalization, immigration, and telecommunications, humans in the modern world often interact across language backgrounds: native English speakers interact with non-native speakers, and speakers of different English dialects interact with one another. The current project addresses how humans handle the variability of conversational speech in communicative situations. Detailed knowledge of natural, reduced speech gained through this project will benefit speech technology and voice-based human-computer interaction, particularly speech synthesis and automatic recognition of spontaneous speech. Because the project includes extensive investigation of participants' English proficiency and language background, it will also shed light on what factors make it easier or harder for non-native listeners to understand conversational speech in their second language. The project forms a synergistic international collaborative group to answer these questions.

Project Start:
Project End:
Budget Start: 2010-09-01
Budget End: 2016-02-29
Support Year:
Fiscal Year: 2010
Total Cost: $271,481
Indirect Cost:
Name: University of Arizona
Department:
Type:
DUNS #:
City: Tucson
State: AZ
Country: United States
Zip Code: 85719