Hearing engages, in a seemingly effortless way, complex processes and transformations collectively known as auditory scene analysis, through which the auditory system consolidates acoustic information from the environment into perceptual and cognitive experiences. The project proposed here explores a fundamental perceptual component of auditory scene analysis called auditory stream segregation. This phenomenon manifests itself in the ability of humans and animals to attend to one of many competing acoustic streams even in extremely noisy and reverberant environments, a situation known in the literature as the "cocktail party problem". While completely intuitive and omnipresent in humans, mammals, birds, and fish, this remarkable perceptual ability remains shrouded in mystery. It has rarely been quantified in objective psychoacoustical tests or investigated in non-human species, and it has seldom been explored in physiological experiments in humans or animals. Consequently, the few attempts at developing computational models of auditory stream segregation remain highly speculative and lack the perceptual and physiological data to support their formulations. This in turn has considerably hindered the development of such capabilities in engineering systems, such as automatic speech recognition or the detection and tracking of target sounds in sensor networks.

The proposed research seeks to develop a computational model of auditory scene analysis that accounts for perceptual and neuronal findings of auditory stream segregation. The intellectual merit of this work lies in providing a rigorous framework for the design of new psychoacoustic and physiological experiments on streaming, and for developing effective algorithmic implementations to tackle the "cocktail party problem" in engineering applications. The project draws upon the expertise of neurobiologists, psychoacousticians, and engineers to integrate psychoacoustic, physiological, and computational techniques. The broader impact of this effort is in providing versatile and tractable models of auditory stream segregation, thereby significantly facilitating the integration of such capabilities in engineering systems such as automatic speech recognition or the detection and tracking of target sounds in sensor networks. This project will also provide a rigorous foundation for the design and generation of new hypotheses to better understand the neural basis of active listening.
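
For illustration only, and not drawn from the proposal itself: the publications listed at the end of this record develop a "temporal coherence" account of streaming, in which frequency channels whose energy envelopes fluctuate together over time are bound into a single perceptual stream. A minimal Python sketch of that grouping principle, using hypothetical channel envelopes, a hypothetical noise level, and an assumed coherence threshold of 0.5, follows.

    import numpy as np

    # Illustrative sketch of the temporal coherence principle (cf. Elhilali
    # et al., 2009, Neuron; Teki et al., 2013, eLife): channels whose
    # envelopes are correlated over time are grouped into one stream.
    # All signals and parameter values here are hypothetical.

    fs = 1000                                  # envelope sampling rate (Hz)
    t = np.arange(0, 2.0, 1 / fs)              # 2 s of envelope samples

    # Two anti-phase 4 Hz burst rhythms stand in for two competing sources.
    rhythm_a = (np.sin(2 * np.pi * 4 * t) > 0).astype(float)
    rhythm_b = 1.0 - rhythm_a

    # Four mock cochlear-channel envelopes: two follow source A, two source B.
    rng = np.random.default_rng(0)
    channels = np.stack([
        rhythm_a + 0.1 * rng.standard_normal(t.size),   # channel 0 -> A
        rhythm_a + 0.1 * rng.standard_normal(t.size),   # channel 1 -> A
        rhythm_b + 0.1 * rng.standard_normal(t.size),   # channel 2 -> B
        rhythm_b + 0.1 * rng.standard_normal(t.size),   # channel 3 -> B
    ])

    # Temporal coherence measured as pairwise envelope correlation.
    coherence = np.corrcoef(channels)

    # Crude two-stream split: channels coherent with channel 0 form one
    # stream; the remainder form the competing stream.
    stream_a = np.where(coherence[0] > 0.5)[0]
    stream_b = np.where(coherence[0] <= 0.5)[0]
    print("stream A channels:", stream_a)     # expected: [0 1]
    print("stream B channels:", stream_b)     # expected: [2 3]

A full model along these lines would presumably compute coherence over sliding windows across many cortical-like channels and feature dimensions; the binary split above only illustrates the grouping criterion.
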

Agency: National Institutes of Health (NIH)
Institute: National Institute on Aging (NIA)
Type: Research Project (R01)
Project #: 5R01AG027573-03
Application #: 7269263
Study Section: Special Emphasis Panel (ZRG1-IFCN-B (50))
Program Officer: Chen, Wen G
Project Start: 2005-09-01
Project End: 2009-07-31
Budget Start: 2007-08-01
Budget End: 2009-07-31
Support Year: 3
Fiscal Year: 2007
Total Cost: $213,117
Indirect Cost:
Name: University of Maryland College Park
Department: Engineering (All Types)
Type: Schools of Engineering
DUNS #: 790934285
City: College Park
State: MD
Country: United States
Zip Code: 20742
Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder et al. (2013) Segregation of complex acoustic scenes based on temporal coherence. eLife 2:e00699
Elhilali, Mounya; Xiang, Juanjuan; Shamma, Shihab A et al. (2009) Interaction between attention and bottom-up saliency mediates the representation of foreground and background in an auditory scene. PLoS Biol 7:e1000129
Elhilali, Mounya; Ma, Ling; Micheyl, Christophe et al. (2009) Temporal coherence in the perceptual organization and cortical representation of auditory scenes. Neuron 61:317-29
Elhilali, Mounya; Shamma, Shihab A (2008) A cocktail party with a cortical twist: how cortical mechanisms contribute to sound segregation. J Acoust Soc Am 124:3751-71