Coordinated actions of the many components of the vocal tract during speech produce a complex acoustic signal. Because the main signal is sound, it is often assumed that the important aspects of that sound can be identified without regard to the speech gestures that gave rise to it. Some acoustic signatures, however, were only discovered (in the previous cycle of this grant) by making predictions from the way the tongue moves. Although previous researchers had claimed that perceptual recovery of such gestural information is impossible, it turns out that, for the regions useful for speech, there are enough constraints to make the computation solvable. The current research will extend those results from vowels to the more critical consonants and will show that listeners make use of these articulatory signatures. Two main classes of theories, gestural and acoustic, differ in how they treat the learning of this acoustic evidence. Acoustic theories attribute it to learning during babbling, while gestural theories assert that the constraints of the vocal tract are sufficient. The gestural hypothesis that listeners make use of all aspects of a gesture predicts that even unfamiliar information will be used, while the acoustic theory leads us to expect that prior experience is needed. We have found that unusual gestural correlates, such as a puff of air, are used perceptually as well, despite not having been learned.
A second aim of the research is to expand those findings to even more unusual sources of information (e.g., visual evidence of a flickering candle near the speaker's mouth). These air puffs, called aspiration, are not used by all languages, however, and we will test whether active, linguistic use of aspiration is necessary for using these gestural cues. These results will shape our understanding of the fundamental organization of speech and its learning. Learning a second language, whether it is English or one of the world's many other languages, is often hampered by difficulty with the new sounds the other language uses. The third aim of this project is to apply the results of the basic studies addressing its first two aims to the exploration of new ways of training language learners to produce novel sounds. To the extent that speech perception is tightly linked to production, providing feedback on production of the imperfectly learned sounds should increase success. Here, the feedback will be provided by ultrasound images of the tongue during difficult sounds. An example for those learning English is mastery of the /l/ and /r/ sounds; for English speakers learning another language, an example is the trilled /r/ of Spanish. The studies proposed here are expected to provide new ways of improving second language learning.

Public Health Relevance

The project addresses ways in which the acoustic speech signal can be used by listeners to extract the underlying linguistically significant movements of the vocal tract. The research will show which acoustic information is important, that perceivers also use non-acoustic information, and that use of speech production feedback improves second language learning.

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section: Language and Communication Study Section (LCOM)
Program Officer: Shekim, Lana O
Haskins Laboratories, Inc.
New Haven
United States
Stevenson, A J T; Chiu, C; Maslovat, D et al. (2014) Cortical involvement in the StartReact effect. Neuroscience 269:21-34
Noiray, Aude; Iskarous, Khalil; Whalen, D H (2014) Variability in English vowels is comparable in articulation and acoustics. Lab Phonol 5:271-288
Krivokapić, Jelena (2014) Gestural coordination at prosodic boundaries and its role for prosodic structure and speech planning processes. Philos Trans R Soc Lond B Biol Sci 369:20130397
Katsika, Argyro; Krivokapić, Jelena; Mooshammer, Christine et al. (2014) The coordination of boundary tones and its interaction with prominence. J Phon 44:62-82
Lammert, Adam; Goldstein, Louis; Narayanan, Shrikanth et al. (2013) Statistical Methods for Estimation of Direct and Differential Kinematics of the Vocal Tract. Speech Commun 55:147-161
Noiray, Aude; Menard, Lucie; Iskarous, Khalil (2013) The development of motor synergies in children: ultrasound and acoustic measurements. J Acoust Soc Am 133:444-52
Iskarous, Khalil; Mooshammer, Christine; Hoole, Phil et al. (2013) The coarticulation/invariance scale: mutual information as a measure of coarticulation resistance, motor synergy, and articulatory invariance. J Acoust Soc Am 134:1271-82
Noiray, Aude; Cathiard, Marie-Agnes; Menard, Lucie et al. (2011) Test of the movement expansion model: anticipatory vowel lip protrusion and constriction in French and English speakers. J Acoust Soc Am 129:340-9
Fowler, Carol A; Thompson, Jaqueline M (2010) Listeners' perception of compensatory shortening. Atten Percept Psychophys 72:481-91
Iskarous, Khalil; Kavitskaya, Darya (2010) The Interaction between Contrast, Prosody, and Coarticulation in Structuring Phonetic Variability. J Phon 38:625-639

Showing the most recent 10 out of 35 publications