Natural sounds are complex, high-dimensional signals. The "cocktail-party phenomenon" in the hearing sciences is a classic example of the brain's ability to parse out and attend to a particular dynamic signal (speech) in the presence of multiple extraneous (non-signal) sounds, all of which reach the ears concurrently and continuously vary in pitch and location over time. The human ability to identify a relevant signal within this "acoustic mixture" far exceeds that of even the most sophisticated automated speech-recognition systems. To understand how people process complex sounds that dynamically vary in time, space, and frequency, scientists must determine how the brain organizes these auditory dimensions. With support from the National Science Foundation, Dr. Saberi and his colleagues will use neuroimaging techniques to systematically map the neural landscape that underlies the functional organization of brain regions responsive to temporal, spatial, and spectral aspects of complex sounds.
The broader impacts of this project include applications to automated speech-recognition systems, development of auditory navigation systems for the blind, improved signal processing in auditory prostheses for the hard-of-hearing and cochlear-implant users, and a deeper understanding of cortically based auditory deficits in humans. The project integrates research and education by providing opportunities for graduate and undergraduate students to engage in research, and by complementing the planned development of undergraduate and Ph.D. programs in cognitive neuroscience at UCI and an interdisciplinary Center for Cognitive Neuroscience.