Understanding spoken language is a difficult task. Variation caused by dialects and accents makes this task even more daunting. Sometimes when one meets a person with a distinctive accent, communication breaks down due to misunderstandings. Consider the word 'card'. A listener may encounter unfamiliar productions of this word that sound like other words:

* 'cart' when produced by a native Polish speaker,
* 'cod' when produced by a Bostonian, or
* 'guard' when produced by a native Spanish speaker.

The differences between the familiar pronunciation of 'card' and the variations above are minimal. But in language, minimal differences are what make two words distinct, and the variation in spoken words can cross the perceptual boundaries between words. As experience with a speaker or accent increases, there are fewer misunderstandings, presumably because one learns something about the speaker or the accent. This project investigates how this sort of perceptual adjustment takes place. What information do listeners focus on in order to learn a new variant of a word? Does one learn something about a specific voice, or about a general accent? By examining how native speakers of English perceive and recognize non-native speech, we can better understand how learning takes place, what types of information are critical in learning, and how new information is mentally represented.

In particular, this project investigates how listeners learn to use new or unfamiliar acoustic cues to resolve potential misunderstandings, by examining how listeners respond to different words and word pairs, as illustrated above, over time. For example, at the beginning of an experiment, a subject may treat non-native productions of 'cart' and 'card' as the same word (e.g., 'cart'). But over time, and with training, the subject may adjust his or her perceptual categories and begin to hear the productions as two different words -- as intended by the speaker. By manipulating whether listeners hear the words produced by a single speaker, by multiple speakers of a single language, or by multiple speakers of different languages throughout the experiment, it can be determined whether the adjustments are particular to one voice, one language, or one learning process.

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Application #: 0720054
Program Officer: Joan Maling
Project Start:
Project End:
Budget Start: 2007-10-01
Budget End: 2011-03-31
Support Year:
Fiscal Year: 2007
Total Cost: $349,267
Indirect Cost:
Name: Stanford University
Department:
Type:
DUNS #:
City: Palo Alto
State: CA
Country: United States
Zip Code: 94304