9407050 Auslander

There are many aspects of automated speech understanding, but the overall problem is often divided into two main tasks. The first is the transcription of spoken language into text, and the second is the derivation of meaning from the transcribed speech. These tasks are traditionally addressed separately, assuming a clean interface at the level of transcription. However, it is clear that the acoustic and semantic problems must be intimately related. It is also felt that traditional tools for the acoustic analysis of speech signals, based on the Fourier spectrum, are insufficient for the robust recognition of fluent speech. The substance of our project will be to study time-frequency feature extraction techniques for the purpose of clarifying the acoustic-semantic relationship, with the goal of improving the performance of systems for spoken language understanding. Specifically, we propose the integration of time-frequency techniques being developed at CUNY with the acoustic and semantic processing already in place at AT&T Bell Laboratories. A primary issue will be to understand the relationship of the extracted time-frequency objects to traditional phonetic subword units, with emphasis on providing improved acoustic discrimination in regions of semantic ambiguity. While the primary features extracted by the auditory periphery are innate in humans, the perceptual metric and the subword and word symbols are acquired knowledge. We want to understand how time-frequency feature extraction techniques might simulate this behavior in a machine, for the purpose of improving the robustness of spoken language processing.
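For context, the sketch below is a minimal illustration of the traditional short-time Fourier analysis the abstract refers to, together with a crude time-frequency feature (the dominant frequency track) extracted from it. It is not part of the proposal: the sample rate, the synthetic chirp signal, and the use of SciPy's stft are all assumptions introduced purely for illustration.

```python
# A minimal sketch of short-time Fourier analysis, the traditional
# Fourier-spectrum tool the abstract contrasts with richer
# time-frequency feature extraction. All parameters are assumed.
import numpy as np
from scipy.signal import stft

fs = 8000                               # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic formant-like signal: a chirp whose frequency rises over time.
x = np.sin(2 * np.pi * (300 + 400 * t) * t)

# Short-time Fourier transform: a sliding-window Fourier spectrum.
f, frames, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
magnitude = np.abs(Z)                   # |STFT| is the spectrogram

# A crude time-frequency "object": the dominant frequency per frame.
dominant = f[np.argmax(magnitude, axis=0)]
print(dominant[:5])                     # rising track for the chirp
```

A fixed-window spectrum like this trades time resolution against frequency resolution uniformly; the adaptive time-frequency methods the proposal studies aim to localize acoustic events more flexibly than this single trade-off allows.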