The purpose of this work is to develop improved methods for increasing the salience of acoustic signals for listeners with limited hearing and poor frequency selectivity. The long-term goal is to develop wearable auditory devices that give this population a third alternative for hearing assistance, in addition to cochlear implants and tactile aids. Previous work has shown that certain speech cues important for lipreading can be made more useful to impaired listeners when the acoustic signal is sufficiently simplified. Compared with normal speech, these simplified signals can yield much better performance on closed-set feature identification tasks. Unfortunately, some of these listeners still have difficulty realizing substantial speech-reception improvements over speechreading alone when evaluated with open-set connected speech materials. The applicants believe this failure likely occurs because the listener must devote full and careful attention to the acoustic stimulus at the expense of speechreading, making combined auditory-visual reception very difficult. It is the applicants' contention that further increasing the perceptual salience of important acoustic signals would enable impaired listeners to integrate auditory and visual speech cues more effectively. In this work, the applicants will study a number of approaches to enhancing the perceptual salience of acoustic signals over and above that afforded by signal amplification and simplification.