Our knowledge of the sound patterns in a language plays a key role in determining our perception of incoming speech. An important question for an accurate model of speech perception is how the auditory system and our knowledge of sound patterns interact. Do we first use our auditory system to perceive speech just like any other type of sound, and then later bring our knowledge of sound patterns found in the language to bear on it? Or do we apply our knowledge of the sound patterns immediately? According to the first view, the working of the auditory system itself cannot be influenced by our knowledge of linguistic patterns, but on the second view, it can.

Under the direction of Dr. John Kingston, Mr. Michael Key will conduct identification and discrimination studies, as well as a priming experiment that measures components of the brain's electrical response, in order to investigate these questions. This dissertation research will assess whether the behavior associated with knowledge of a sound pattern corresponds only to relatively late electrical activity in the brain, or whether it is also reflected in relatively early brain activity. Converging evidence from these techniques bears directly on questions about the architecture of speech perception, specifically whether linguistic knowledge is applied separately from general auditory processing or simultaneously with it. Several of the studies concern sound patterns in other languages and the role of a listener's linguistic experience, and will therefore be carried out at universities in France, Germany, and the U.K. This project will help to establish new international partnerships for speech perception research that compares listeners from different language backgrounds. In addition, undergraduate students will be trained in the methods of speech perception research.

Project Report

Our knowledge of the sound patterns in a language plays a key role in determining our perception of incoming speech. An important question for an accurate model of speech perception is how the auditory system and our knowledge of sound patterns interact. Do we first use our auditory system to perceive speech just like any other type of sound, and then later bring our knowledge of sound patterns found in the language to bear on it? Or do we apply our knowledge of the sound patterns immediately? According to the first view, the working of the auditory system itself cannot be influenced by our knowledge of linguistic patterns, but on the second view, it can.

The results of one series of experiments are interpreted as evidence that phonological processing is autonomous from (rather than interactive with) auditory processing. A second important question concerns the ways in which knowledge of the sound patterns and constraints of our native language acts as a filter on our perception of speech sounds. The experiments in this project yield evidence that: (1) one kind of sound pattern (assimilation processes) can cause listeners to confuse sound sequences that have undergone the process with those that have not, particularly when discrimination is based on categories; (2) another kind of sound pattern (allophonic variation) can prompt listeners to use the presence of one sound to anticipate the presence of the sound that legally co-occurs with it; and (3) restrictions on the presence of "r" in non-rhotic English dialects (e.g., Eastern New England, England, Australia) can cause listeners to perceive "r" when it is absent or to fail to perceive "r" when it is present.

Project Start:
Project End:
Budget Start: 2010-04-15
Budget End: 2012-03-31
Support Year:
Fiscal Year: 2009
Total Cost: $9,450
Indirect Cost:
Name: University of Massachusetts Amherst
Department:
Type:
DUNS #:
City: Amherst
State: MA
Country: United States
Zip Code: 01003