One of the core goals in developmental science is to understand how children make sense of their highly variable sensory input. For instance, how does a child know that a cup viewed from the top and a cup viewed from the side--two very different visual images--are the same object? How does a child know that her mother saying "cat" and her sibling saying "cat"--two very different-sounding versions--are the same word? Adults do these things with trivial ease. However, the child has to figure out what sound patterns matter: which sounds should be linked to a representation of furry animals, and which should be linked to the person talking. Moreover, because spoken language happens rapidly, one word after another, the child must compute all of this information very quickly.

Surprisingly little is known about the learning mechanisms that sort out the various sound patterns in spoken language. Children in the first year of life rapidly learn to tune out sound patterns that are not present in their native language, such as the difference between French nasalized and non-nasalized vowels. However, it is not known how children process sound patterns that are not directly related to meaning. The goal of this research is to understand how young language learners process talker-related sound variability--sound differences that do not change the meaning of a word, but vary with the vocal, social, and emotional characteristics of the person speaking. The research explores how children deal with talker variability: how it influences their learning of new vocabulary; what allows them to tune it out when recognizing words, but pay attention to it when recognizing talkers; how well they recognize voices and properties of voices such as gender and accent; and how this changes over development.

This research has the potential to transform the way researchers think about language acquisition. Is it a process of tuning out much of the sound variability present, or is it instead a process of accumulating finely-detailed acoustic knowledge? More broadly, the knowledge gleaned will help to improve learning of new sound patterns, such as words in second languages and speech in unfamiliar accents. By more fully exploring normal language development, it will contribute to the picture of what is missing or disrupted in child language deficits. It may suggest improvements to automatic voice recognition systems, which currently do not cope well with the natural acoustic variability among talkers. Finally, the research will help uncover how listeners learn to make inferences about people based on the way they talk. This award will support a variety of students (graduate, undergraduate, high school) interested in scientific fields, contributing to the future science and technology workforce. It will also enable the investigator to share experimental materials with other researchers, streamlining the research process and quickening the pace of scientific advancement.

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Application #: 1057080
Program Officer: Peter Vishton
Budget Start: 2011-08-15
Budget End: 2018-07-31
Fiscal Year: 2010
Total Cost: $410,000
Name: University of California San Diego
City: La Jolla
State: CA
Country: United States
Zip Code: 92093