9411607 Zahorian
The research proposed in this project is to investigate and optimize methods for extracting speaker-independent, acoustically based speech parameters that signal the phonetic content of speech, and to determine transformations that can display these features in an articulation training aid for the hearing impaired. The proposed project is a continuation and extension of research previously funded by the NSF. A major component of the effort will be to coordinate the collection of a large database of speech samples from both normally-hearing and hearing-impaired speakers; this database will be required not only for the proposed work but will also facilitate other research efforts in the field. Extensive experiments will be conducted to optimize the conversion of speech parameters to display parameters so that phonemes, and distinctive features of phonemes, produced either in isolation or in syllabic contexts can readily be discriminated and identified based on their display characteristics. A combination of nonlinear and linear transforms will convert acoustic measurements of the auditory stimuli to a visual display representation. Acoustic features will be extracted from global spectral shape, fundamental frequency, and short-time energy. The feature extraction and classification process will be individually optimized for every pair of phones in English.
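The abstract names the three acoustic feature classes but not the algorithms behind them. The sketch below is one plausible frame-level realization, assuming a low-order cosine expansion of the log magnitude spectrum for the global spectral shape, an autocorrelation peak pick for fundamental frequency, and log RMS for short-time energy; these are common choices, not the project's stated methods.

    import numpy as np

    def extract_frame_features(frame, fs, n_shape=10):
        # Taper the frame to reduce spectral leakage.
        frame = frame * np.hanning(len(frame))

        # Short-time energy as log RMS (dB-like scale).
        energy = 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)

        # Global spectral shape: the first n_shape cosine-basis (DCT-II)
        # coefficients of the log magnitude spectrum.
        spectrum = np.log(np.abs(np.fft.rfft(frame)) + 1e-12)
        k = np.arange(len(spectrum))
        shape = np.array([
            spectrum @ np.cos(np.pi * i * (k + 0.5) / len(spectrum))
            for i in range(n_shape)
        ]) / len(spectrum)

        # Fundamental frequency from the autocorrelation peak in a
        # 60-400 Hz search range; 0.0 flags an unvoiced frame.
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(fs / 400), int(fs / 60)
        f0 = 0.0
        if hi < len(ac):
            lag = lo + int(np.argmax(ac[lo:hi]))
            if ac[lag] > 0.3 * ac[0]:   # crude voicing threshold
                f0 = fs / lag
        return np.concatenate(([energy, f0], shape))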
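The combination of nonlinear and linear transforms is specified only at the block-diagram level. A minimal sketch, assuming a single tanh nonlinearity followed by a linear projection down to two screen coordinates (the weights W1, b1, W2, b2 are hypothetical placeholders that would come from the display-optimization experiments, not trained values from the project):

    import numpy as np

    def features_to_display(x, W1, b1, W2, b2):
        # Nonlinear stage, then linear stage, mapping an acoustic
        # feature vector down to (x, y) display coordinates.
        hidden = np.tanh(W1 @ x + b1)
        return W2 @ hidden + b2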
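Likewise, the per-pair optimization of feature extraction and classification is not described in detail. One straightforward reading, with a Fisher linear discriminant standing in for whatever criterion the project actually optimizes, fits a separate two-class discriminant for each pair of phone labels; features_by_phone is a hypothetical mapping from phone label to an array of per-frame feature vectors.

    from itertools import combinations
    import numpy as np

    def train_pairwise_classifiers(features_by_phone):
        # Fit one two-class Fisher discriminant per phone pair, so each
        # pair gets a projection tuned to separating just those two phones.
        classifiers = {}
        for a, b in combinations(sorted(features_by_phone), 2):
            xa, xb = features_by_phone[a], features_by_phone[b]
            ma, mb = xa.mean(axis=0), xb.mean(axis=0)
            # Pooled within-class scatter, regularized for invertibility.
            sw = np.cov(xa, rowvar=False) + np.cov(xb, rowvar=False)
            sw += 1e-6 * np.eye(sw.shape[0])
            w = np.linalg.solve(sw, ma - mb)      # Fisher direction
            threshold = w @ (ma + mb) / 2.0       # midpoint decision rule
            classifiers[(a, b)] = (w, threshold)  # label 'a' if w @ x > threshold
        return classifiers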