Adults treat linguistically relevant and irrelevant contrasts between speech sounds differently. For example, in English, the contrast between the vowel sounds of "pit" and "peat" is relevant (phonemic), since "pit" and "peat" differ in meaning. Variation in the vowels of "can't" and "cat" is not relevant (allophonic): exchanging the "a" sounds does not affect meaning, and speakers produce one or the other depending on the sound context (the "a" of "can't" before "n", the "a" of "cat" before "t"). Whether a contrast is phonemic or allophonic must be learned, since the status of a given contrast varies across languages. In French, the reverse holds: the contrast between the two "a" sounds signals differences in meaning (as in "bas" [low] and "banc" [bench]), whereas the "pit"-"peat" contrast does not, with the choice of vowel depending on context. Infants hear both types of contrasts, yet by the end of their first year they have learned to ignore allophonic contrasts. How do they learn to attend to phonemic contrasts and ignore allophonic ones?

Computational models learn to differentiate phonemic from allophonic contrasts only when explicitly given measures of sound similarity. The hypothesis driving the current research is therefore that contrasts produced phonemically (e.g., the "a"s of French speakers) and allophonically (e.g., the "a"s of English speakers) differ in their similarity, such that the sounds of an allophonic contrast are more similar to each other, and that young infants are sensitive to these differences.

This hypothesis will be tested using two approaches. First, a corpus of speech sounds will be created by recording adult speakers of English and French producing two contrasts (analogous to the "pit"-"peat" and "can't"-"cat" differences) while speaking to both infants and adults. Acoustic measurements will determine which differences, if any, indicate whether sounds belong to phonemic or allophonic categories for the speaker. Second, behavioral experiments will test whether infants and adults use these available acoustic differences and distributions when learning sound patterns. Participants will be exposed to novel speech sound patterns and then tested on additional items that either follow or violate those patterns. Learning of the novel patterns will be inferred from the differentiation of test items that follow and violate the patterns (indicated by listening-time differences in infants and rating differences in adults). The hypothesis will be supported if larger differences in the acoustics and/or distribution of speech sounds induce participants to treat the sounds as part of a phonemic contrast, while smaller differences induce them to treat the sounds as allophonic variants.
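To make the similarity hypothesis concrete, the sketch below shows one way a model could be "explicitly given a measure of sound similarity." It is illustrative only, not part of the proposed project: the formant values, category names, and the choice of Euclidean distance in F1/F2 space are all assumptions. Tokens of two vowel categories are summarized by their first two formants, and the between-category distance is compared to the within-category spread.

```python
import math
import statistics

# Hypothetical formant measurements (F1, F2 in Hz) for tokens of two
# vowel categories; real values would come from the recorded corpus.
category_a = [(750, 1750), (760, 1720), (740, 1780)]  # e.g., "a" as in "cat"
category_b = [(700, 1650), (690, 1670), (710, 1640)]  # e.g., "a" as in "can't"

def mean_formants(tokens):
    """Average F1 and F2 across the tokens of one category."""
    f1s, f2s = zip(*tokens)
    return statistics.mean(f1s), statistics.mean(f2s)

def within_spread(tokens):
    """Mean distance of tokens from their own category center."""
    center = mean_formants(tokens)
    return statistics.mean(math.dist(t, center) for t in tokens)

# A simple separability score: between-category distance relative to
# within-category spread. Under the hypothesis, this ratio would be
# larger for a phonemic contrast than for an allophonic one.
between = math.dist(mean_formants(category_a), mean_formants(category_b))
within = statistics.mean([within_spread(category_a), within_spread(category_b)])
print(f"separability = {between / within:.2f}")
```

On this kind of measure, the prediction is simply that the same contrast yields a higher separability score in productions by speakers for whom it is phonemic (French, for the "a" contrast) than by speakers for whom it is allophonic (English).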

The research will provide insight into the information available for, and the fundamental mechanisms of, learning a language's sound system (phonology); extend knowledge of how infants learn categories; and increase our understanding of the role of infant-directed speech in language acquisition. Discovering whether infants differ in their ability to learn categories from infant- versus adult-directed speech may influence both parenting techniques and early intervention strategies. The project contributes to a growing body of work clarifying the relationship between early speech perception and later language acquisition, which may eventually make it possible to use perception tasks in infancy for diagnosis and early intervention (e.g., sensitivity to native-language contrasts predicts later word comprehension, word production, and reading skill). Finally, because students play a key role in all phases of the project, the award not only enables research on phonological acquisition but also educates and motivates students and future researchers in linguistic, behavioral, and experimental approaches to the study of language development.

Budget Start: 2009-09-01
Budget End: 2014-02-28
Fiscal Year: 2008
Total Cost: $342,348
Institution: Purdue University
City: West Lafayette
State: IN
Country: United States
Zip Code: 47907