We propose to undertake research leading to the development of an assistive listening device for the hearing impaired. The device would monitor the acoustic environment, detecting individual sounds and separating them from background noise and from one another. It would identify familiar sounds, and would describe unfamiliar sounds both in terms of their similarity to known sounds and in qualitative terms related to perceptual notions such as loudness, duration, pitch, and abruptness.

Phase I has two goals. The first is to determine the feasibility of a novel technique for locating acoustic events in a time-frequency-amplitude continuum. The technique classifies small regions of this representation into four fundamental classes and merges regions into acoustic events using rules derived solely from physical acoustics. The second goal is to verify the utility of a perceptually based representation for the identification of sounds. Its effectiveness will be compared against that of a two-dimensional cepstral representation, which has been shown to be useful for acoustic pattern processing.

If successful, the resulting technology would permit the development of more powerful assistive listening devices than are currently available, as well as acoustic monitors for other medical and workplace applications.
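The region-classify-and-merge idea can be sketched in code. The proposal does not name its four fundamental classes or state its merging rules, so the class labels ("silence", "tonal", "transient", "noise-like"), the thresholds, and the 4-connected flood-fill below are hypothetical stand-ins, meant only to illustrate the general shape of the technique: classify small time-frequency tiles, then merge like-classed neighbors into candidate acoustic events.

```python
# Illustrative sketch only: class names, thresholds, and the merge rule
# are assumptions, not the proposal's actual method.

def classify_tile(mags):
    """Assign one tile (a list of spectral magnitudes) to one of four
    illustrative classes based on simple energy statistics."""
    mean = sum(mags) / len(mags)
    peak = max(mags)
    if peak < 0.05:
        return "silence"
    if peak > 3 * mean:                 # energy concentrated in few bins
        return "tonal"
    spread = sum((m - mean) ** 2 for m in mags) / len(mags)
    if spread > mean:                   # highly variable magnitudes
        return "transient"
    return "noise-like"

def merge_events(grid):
    """Flood-fill adjacent tiles of the same class into events.

    grid: 2D list [time][freq] of class labels.
    Returns a list of (class, set of (t, f) coordinates) pairs.
    """
    seen, events = set(), []
    for t in range(len(grid)):
        for f in range(len(grid[0])):
            if (t, f) in seen or grid[t][f] == "silence":
                continue
            label, stack, region = grid[t][f], [(t, f)], set()
            while stack:
                ct, cf = stack.pop()
                if not (0 <= ct < len(grid) and 0 <= cf < len(grid[0])):
                    continue
                if (ct, cf) in seen or grid[ct][cf] != label:
                    continue
                seen.add((ct, cf))
                region.add((ct, cf))
                stack += [(ct + 1, cf), (ct - 1, cf),
                          (ct, cf + 1), (ct, cf - 1)]
            events.append((label, region))
    return events
```

For example, a grid with a cluster of "tonal" tiles surrounded by "silence" yields a single tonal event covering exactly that cluster.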
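The baseline named for comparison, a two-dimensional cepstral representation, is standardly computed as the 2D Fourier transform of the log-magnitude spectrogram. The proposal does not spell out its exact cepstral pipeline, so the sketch below shows only the textbook construction, using a naive 2D DFT that is adequate for tiny grids (a real implementation would use an FFT library).

```python
import cmath
import math

def cepstrum_2d(spec):
    """Two-dimensional cepstrum of a magnitude spectrogram:
    |2D DFT of log|spectrogram||, normalized by grid size.
    Naive O((TF)^2) DFT; illustration only."""
    T, F = len(spec), len(spec[0])
    # Small floor avoids log(0) for empty bins.
    logmag = [[math.log(spec[t][f] + 1e-9) for f in range(F)]
              for t in range(T)]
    out = []
    for p in range(T):
        row = []
        for q in range(F):
            acc = 0j
            for t in range(T):
                for f in range(F):
                    acc += logmag[t][f] * cmath.exp(
                        -2j * cmath.pi * (p * t / T + q * f / F))
            row.append(abs(acc) / (T * F))
        out.append(row)
    return out
```

A constant spectrogram concentrates all cepstral energy in the (0, 0) coefficient, which is a quick sanity check on the transform.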