Coordinated actions of the many components of the vocal tract during speech produce a complex acoustic signal. Because the main signal is sound, it is often assumed that we can find the important aspects of that sound without regard to the speech gestures that gave rise to it. Some acoustic signatures, however, were only discovered (in the previous cycle of this grant) by making predictions from the way the tongue moves. Although previous researchers had claimed that perceptual recovery of such gestural information is impossible, it turns out that, for the regions useful for speech, there are enough constraints to make the computation solvable. The current research will extend those results from vowels to the more critical consonants and will show that listeners make use of the signatures of articulation. Two main classes of theories, gestural and acoustic, differ in their treatment of how this acoustic evidence is learned. Acoustic theories attribute it to learning during babbling, while gestural theories assert that the constraints of the vocal tract are sufficient. The gestural hypothesis that listeners make use of all aspects of a gesture predicts that even unfamiliar information will be used, while the acoustic theory leads us to expect that prior experience is needed. We have found that unusual gestural correlates, such as a puff of air, are used perceptually as well, despite not being learned.
A second aim of the research is to extend those findings to even more unusual sources of information (e.g., visual evidence of a candle flickering near the speaker's mouth). Such air puffs, called aspiration, are not used by all languages, however, and we will test whether active, linguistic use of aspiration is necessary for exploiting these gestural cues. These results will shape our understanding of the fundamental organization of speech and of how it is learned. Learning a second language, whether it is English or one of the world's many other languages, is often hampered by difficulty with the new sounds the other language uses. A third aim of this project is to apply the results of the basic studies addressing the first two aims to exploring new ways of training language learners to produce novel sounds. To the extent that speech perception is tightly linked to production, providing feedback on production of the sounds that are imperfectly learned should increase success. Here, the feedback will be provided by ultrasound images of the tongue during difficult sounds. An example for those learning English is mastery of the /l/ and /r/ sounds. For English speakers learning another language, an example is the trilled /r/ of Spanish. The studies proposed here are expected to provide new ways of improving second language learning.
The project addresses ways in which the acoustic speech signal can be used by listeners to extract the underlying linguistically significant movements of the vocal tract. The research will show which acoustic information is important, that perceivers also use non-acoustic information, and that use of speech production feedback improves second language learning.