When adults listen to speech, their perception is influenced by the phonetic system of their native language. For example, Japanese speakers find it very difficult to hear the difference between English [r] and [l], while English speakers have trouble discriminating Hindi dental [d] and retroflex [D]. In contrast, infants who have not yet acquired the phonetic system of their native language often surpass adults in their ability to discriminate foreign contrasts. By 12 months, however, infants lose sensitivity to most foreign contrasts and show the adult-like pattern of discrimination. What is responsible for this change over the first year of life? With support from the National Science Foundation, Drs. Jessica Maye and Daniel Weiss will follow up on their research demonstrating that infants' sensitivity to statistical regularities in the speech they hear may lead to decreased sensitivity to foreign-language contrasts as well as enhanced sensitivity to native-language contrasts. They will measure the degree to which different distributional patterns in speech affect discrimination by human infants and adults as well as by cotton-top tamarin monkeys. Previous research has found that tamarins, like humans, can learn linguistic regularities from the statistical properties of human speech; however, there are subtle differences across species in what is learned. This project will provide the first test of statistical phonetic learning in another species, aiming to identify similarities and differences both between humans and nonhumans and between infants and adults. The experiments will use monkey call stimuli as well as human speech, so that each species can be compared on its own vocalizations versus those of another species.
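The distributional-learning idea sketched above can be illustrated with a toy simulation (not part of the project's own stimuli or analyses): a learner exposed to a bimodal distribution of tokens along an acoustic continuum should infer two phonetic categories and maintain discrimination, whereas a unimodal distribution supports only one category and discrimination diminishes. The continuum values, sample sizes, and model-selection criterion below are illustrative assumptions only.

```python
# Toy sketch of distributional phonetic learning (illustrative assumptions only).
# Tokens are sampled along a hypothetical acoustic continuum (e.g., voice onset
# time in ms); a bimodal exposure distribution should favor a two-category
# model, while a unimodal distribution should favor a single category.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical exposure sets, 400 tokens each.
bimodal = np.concatenate([
    rng.normal(20, 5, 200),   # cluster near one end of the continuum
    rng.normal(60, 5, 200),   # cluster near the other end
]).reshape(-1, 1)
unimodal = rng.normal(40, 12, 400).reshape(-1, 1)  # single broad cluster

def preferred_categories(tokens):
    """Compare 1- vs. 2-component Gaussian mixtures by BIC (lower is better)."""
    bics = {
        k: GaussianMixture(n_components=k, random_state=0).fit(tokens).bic(tokens)
        for k in (1, 2)
    }
    return min(bics, key=bics.get)

print("bimodal exposure  ->", preferred_categories(bimodal), "category(ies)")
print("unimodal exposure ->", preferred_categories(unimodal), "category(ies)")
```

Under these assumptions, the bimodal exposure set should be better described by two categories and the unimodal set by one, mirroring the maintained versus diminished discrimination pattern described above; the actual experiments measure this behaviorally in infants, adults, and tamarins rather than by model fitting.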
Our ability to acquire language is one of the fundamental traits distinguishing humans from other animals. For hundreds of years, scholars have debated the extent to which the mechanisms underlying language acquisition are unique to humans, since it is clear that only humans ultimately acquire the full complexity of human language. One such learning mechanism is the ability to draw linguistic inferences from statistical patterns in the language we hear. This ability is not exclusive to humans; however, there appear to be differences in the units over which humans and nonhumans compute such regularities. Drs. Maye and Weiss hypothesize that these differences may underlie what makes humans such adept learners of human language. Similarly, there may be subtle differences in the statistical computations performed by infant and adult learners that would account for the fact that language acquisition beginning in infancy reaches a level of proficiency far surpassing that of languages learned later in life. This project will elucidate the similarities and differences among these three populations in their ability to learn statistical regularities in both speech stimuli and tamarin calls, in order to answer the question of what makes humans, and infants in particular, so remarkably adept at decoding an unknown language. On a practical level, the project will contribute to the infrastructure of science by training students in the interdisciplinary research of cognitive science (including linguistics, developmental psychology, and comparative psychology). Furthermore, a better understanding of what makes infants such successful language learners has far-reaching clinical and technological applications. For example, comparing the learning mechanisms of good learners (e.g., infants) with those of poorer learners (e.g., adults, nonhuman primates) may well lead to new therapies for individuals with language disabilities. Understanding how human listeners cope with enormous amounts of acoustic variation to uncover the message of a spoken utterance will also help to improve speech recognition systems, which currently perform far below the level of human listeners.