The goal is to develop a model to explain how listeners perceive speech in the face of the extreme context-sensitivity resulting from co-articulation. The ability of listeners to recover speech information, despite dramatic articulatory and acoustic assimilation between adjacent speech sounds, is remarkable and central to understanding speech perception. Recent results from this laboratory suggest that it may be possible to define and model quite general processes of auditory perception and learning which may provide a significant part of the explanation for findings demonstrating a correspondence between speech perception and production with perseverative co-articulation. To what extent do general auditory processes serve to accommodate the acoustic consequences of co-articulation? More generally, does perceptual (spectral) contrast serve, in part or in whole, to undo the assimilative nature of co-articulation and its acoustic consequences? More specifically, do general psychoacoustic enhancement effects, presumably due to relatively peripheral and simple neural mechanisms, serve to provide spectral contrast for complex sounds such as speech? In addition to such basic auditory effects, what is the relative contribution of experience (learning) with co-articulated speech? What are the prospects for relatively elegant signal-processing algorithms that provide a desirable adjunct to amplification for hearing-impaired listeners? Toward these ends, studies with non-human animals provide a model of auditory processing unadulterated by experience with speech, as well as a model for investigating the role of experience with co-variation among acoustic attributes consequent to co-articulated production. Here, animals permit precise control of experience with speech, establishing whether learning plays a significant role in accommodating the acoustic consequences of co-articulation while defining the relative contributions of experience and general auditory processes.