This research is directed toward improving speech reception for severely hearing-impaired individuals who rely on speechreading for communication. Attempts to advance basic understanding involve study of speechreading and of the effects of auditory and visual supplements on audiovisual speech reception. Attempts to develop supplements for speechreaders involve extracting effective cues from acoustic speech and displaying them to individuals with severe hearing impairments. Attempts to develop mathematical models of audiovisual integration focus on quantifying how well supplementary signals are integrated with speechreading in the reception of speech segments, suprasegmental characteristics, and sentences. Model predictions are compared with measured speech reception for a variety of listeners and presentation conditions, including degraded auditory and visual reception, to measure the efficiency of audiovisual integration. Attempts to develop signal-processing techniques that derive speechreading supplements from acoustic speech are concerned with simplified signals that can be readily matched to residual hearing; these signals must be derived accurately from speech degraded by interference and reverberation. Attempts to determine the factors that limit the effectiveness of such signals will provide a foundation for realistic evaluation of promising simplified signals as speechreading supplements. Attempts to apply automatic speech recognition to the development of speechreading supplements involve study of Manual Cued Speech, determination of the extent to which modern speech-recognition algorithms can produce speechreading cues automatically, and study of visual displays for presenting speechreading supplements, based on the output of automatic recognition systems, to impaired listeners.
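
To make the modeling idea concrete, the following is a minimal sketch of one simple class of audiovisual-integration model: probability summation, which predicts audiovisual recognition from auditory-alone and visual-alone scores under the assumption that the two channels provide independent cues. This sketch is illustrative only; the function names, the independence assumption, and all numeric values are hypothetical and are not taken from the research described above.

```python
def predict_av(p_auditory: float, p_visual: float) -> float:
    """Predicted audiovisual proportion correct under independence:
    an item is missed only if both channels miss it."""
    return 1.0 - (1.0 - p_auditory) * (1.0 - p_visual)


def integration_efficiency(p_av_observed: float,
                           p_auditory: float,
                           p_visual: float) -> float:
    """Ratio of the observed audiovisual benefit (over the better
    single channel) to the benefit predicted by probability
    summation; 1.0 means integration is as efficient as the
    independence model predicts."""
    baseline = max(p_auditory, p_visual)
    predicted_gain = predict_av(p_auditory, p_visual) - baseline
    observed_gain = p_av_observed - baseline
    if predicted_gain <= 0.0:
        return float("nan")  # model predicts no benefit to compare against
    return observed_gain / predicted_gain


# Illustrative values: speechreading alone 30% correct, a simplified
# auditory supplement alone 40% correct, combined presentation 55%.
predicted = predict_av(0.40, 0.30)
efficiency = integration_efficiency(0.55, 0.40, 0.30)
print(predicted, efficiency)
```

Comparing such model predictions against measured audiovisual scores, across listeners and across degraded auditory or visual conditions, is one way the efficiency of integration described above can be quantified.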