This grant will support research on several techniques for encoding acoustic-phonetic information for use in speaker-independent continuous-speech recognition. Time-delay and recurrent neural-network architectures will be used for spectral analysis of overlapping speech segments, and will be compared with a Bayesian maximum-likelihood classification of the speech spectral dynamics. Neural-network outputs will then be encoded as approximate phonetic distances, with a two-level iterative training method to optimize overall recognition. Integration of this system with hidden-Markov-model speech recognition will also be studied.
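
The abstract does not specify the network details; as a rough illustration only, a single time-delay layer can be sketched as a shared-weight nonlinearity applied to a sliding window of spectral frames, so the same weights see each overlapping speech segment. All sizes and names below are hypothetical, not taken from the proposal.

    # Minimal sketch of one time-delay (TDNN) layer over spectral frames.
    import numpy as np

    rng = np.random.default_rng(0)

    def tdnn_layer(frames, weights, bias):
        """frames: (T, n_coeffs) spectral frames;
        weights: (delay, n_coeffs, n_units) shared across time."""
        delay = weights.shape[0]
        n_out = frames.shape[0] - delay + 1
        out = np.empty((n_out, weights.shape[2]))
        for t in range(n_out):
            window = frames[t:t + delay]  # overlapping context window
            # one activation vector per window position
            out[t] = np.tanh(np.einsum('dc,dcu->u', window, weights) + bias)
        return out

    # Hypothetical sizes: 16 spectral coefficients, 3-frame delay, 8 units.
    frames = rng.standard_normal((100, 16))
    weights = 0.1 * rng.standard_normal((3, 16, 8))
    out = tdnn_layer(frames, weights, np.zeros(8))
    print(out.shape)  # (98, 8)

Because the window slides one frame at a time, adjacent outputs share most of their input segment, which is what lets such a layer analyze overlapping speech segments with a single set of weights.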