This research is aimed at developing highly effective adaptation mechanisms for speaker-independent continuous speech recognition, so as to enhance its robustness under a wide range of speaker and environment conditions. The adaptation is based on modeling the sources of speech spectral variation and proceeds via sequential and iterative unsupervised learning of speech model parameters from on-line speech data. The adaptation further handles interfering sound signals, including "cocktail party" speech, via two-channel speech/sound acquisition and estimation of the cross-talk channel characteristics. The speech modeling and adaptation are carried out within the statistical framework of hidden Markov models and related statistical algorithms. The convergence conditions and fast implementation of the adaptation algorithms are addressed. Adaptive speaker-independent continuous speech recognition systems will be studied in human-computer interaction contexts such as virtual environments and interactive three-dimensional visual computing at the Beckman Institute of the University of Illinois. The funding will be used to support two PhD students. This research is expected to contribute new knowledge to robust speech modeling in the presence of multiple speech variation sources and interference sounds, to significantly advance the robustness of speaker-independent continuous speech recognition systems, and to facilitate a much wider scope of applications for such systems in human-computer interaction.