Learning a new language involves developing the abilities to hear and produce new sounds and sound sequences. This project will use the methodologies of linguistics, psychology, and neurophysiology to determine why some foreign language structures are mastered more easily than others. The goals are to determine whether language learners' problems in pronunciation originate in errors of perception, and at what level of processing those perception errors occur. Two types of methodology will be employed: discrimination tasks, in which listeners judge two stimuli as the same or different; and event-related potentials (ERPs), which measure automatic responses of the auditory system to changes in auditory stimuli. This combination of methods will help to determine whether listeners actually fail to hear the acoustic differences that distinguish words in the foreign language but not in their native language, or whether they do hear the differences but simply fail to categorize them correctly.
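
To make the behavioral measure concrete, the sketch below shows one standard way such same/different judgments are often summarized as a sensitivity score (d'). This is an illustration only, with invented numbers and a hypothetical d_prime helper; the project description does not specify which analysis was actually used.

    # Illustrative sketch only (not part of the project description): one common
    # way to quantify performance in a same/different discrimination task is the
    # sensitivity measure d' (d-prime), computed from the hit rate (responding
    # "different" on trials where the stimuli really differ) and the false-alarm
    # rate (responding "different" on trials where they are the same).
    from statistics import NormalDist

    def d_prime(hits, n_different, false_alarms, n_same):
        """Return d', with a small count correction so rates never hit 0 or 1."""
        hit_rate = (hits + 0.5) / (n_different + 1)
        fa_rate = (false_alarms + 0.5) / (n_same + 1)
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate)

    # Hypothetical listener: 38 hits on 40 "different" trials,
    # 6 false alarms on 40 "same" trials.
    print(round(d_prime(38, 40, 6, 40), 2))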

This will be one of the first studies to investigate the relative perceptibility of foreign language sequences using both behavioral and neurophysiological probes. The findings will inform foreign language instruction, contributing to a better understanding of why some foreign structures are more difficult to master than others, why particular patterns of mispronunciation occur, and whether training in perception of foreign languages facilitates production. The findings will also have important implications for hypotheses concerning the plasticity of the neural structures responsible for language processing.

Project Report

Languages systematically use sound differences to signal differences in meaning; in English, for example, the pairs see/she and teak/cheek are distinguished by differences in the initial consonants of these words. Because the set of possible sounds and sound combinations differs from language to language (for example, Japanese words never contain sound combinations like see or tea), learning a new language involves learning both to pronounce and to perceive the sounds of that language.

The relationship between learning to perceive and learning to produce a language has never been clear: do these processes proceed in tandem, or does one precede the other? Does training in perception aid production, and vice versa? To what extent are learners able to move beyond the confines of their native language system to achieve native-like processing and production of a foreign language, and is there a critical age by which learners should be exposed to the foreign language? These questions are of interest both to foreign language instructors and to cognitive scientists and neuroscientists who study the extent to which the human brain is biased by past experience to perceive the world in a particular way.

This study investigated the relationship between the production and perception of English by native speakers of Japanese, Korean, and Mandarin, using three techniques: (i) acoustic analysis of learners' pronunciations of the foreign language; (ii) behavioral studies of learners' ability to categorize and to discriminate sound differences that are meaningful in the foreign language but not in the native language; and (iii) electroencephalographic (EEG) investigation of learners' involuntary neurophysiological responses to sound differences in the native and foreign languages, using event-related potentials (ERPs). We investigated specific types of errors in the production of specific English structures, with the goal of determining the sources of these errors and what they reveal about the ability of the brain to respond to new types of information.

Our major findings were as follows:

1) Certain foreign structures (e.g., English see) were more likely to be mispronounced than other foreign structures (e.g., English tee), even when both structures were equally new to the learner. In these cases, we found that the degree of difficulty, as reflected in accuracy of pronunciation, correlated with the perceptibility of the structure. Furthermore, this asymmetry in perceptual accuracy showed up not only for learners of English but also for native speakers of English. These results support the existence of a universal, language-independent scale of perceptual difficulty that can determine the rate of mastery of foreign structures, a finding that has implications for designing curricula for foreign language instruction.

2) While some pronunciation errors clearly reflected the effect of the native language, others did not. For example, in Korean, when consonant sequences like [km] come together, the [k] is pronounced as a nasal sound, and the same process is seen in Korean pronunciations of foreign words like Pacman as Pa[ngm]an. However, we found that Korean-speaking learners of English exhibit a variety of different error patterns, such as inserting a vowel between the two consonants or changing other features of the consonants. We found that some of these errors correlated with an inability to distinguish the foreign structure from the mispronounced structure, and were therefore clearly based in perception. Other errors could be traced to differences in the articulatory timing patterns of the foreign language and the native language; for example, while English speakers overlap the end of the [k] sound in Pacman with the beginning of the following [m], Korean speakers who are not able to move quickly enough from the [k] to the [m] produce an intermediate vowel-like sound between the consonants. An understanding of the sources of errors made by language learners is crucial for effective foreign language instruction, as well as for understanding the effect of the first language on subsequent language learning in both perception and production.

3) Sound differences that play a role in distinguishing words in the native language elicited automatic early neurophysiological responses, unlike sound differences that were important only in the foreign language. These results are consistent with the hypothesis that early experience with a native language leads to a "neural commitment" that allows the brain to quickly and efficiently distinguish just those sound differences that are important for that language. These findings are relevant to the expanding study of the ways in which brain structure is shaped by experience.

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Application #: 0746027
Program Officer: Joan Maling
Budget Start: 2008-05-15
Budget End: 2012-12-31
Fiscal Year: 2007
Total Cost: $420,804
Name: State University of New York Stony Brook
City: Stony Brook
State: NY
Country: United States
Zip Code: 11794