In natural situations, the sound environment changes dynamically and rapidly, with multiple sources overlapping in time and competing for attention. The ability to listen to your friend talking while walking down a noisy city street requires brain mechanisms that disentangle the sound mixture, separating your friend's voice from the sounds of the cars and other passing conversations. Auditory scene analysis, the ability to parse and organize the mixture of sound input, is a fundamental auditory process. Yet the neural mechanisms that subserve perceptual sound organization remain poorly understood, and current accounts often rely on theories developed primarily to describe the visual system. Deficits in the ability to select relevant information among multiple competing sound sources are a common complaint in aging and in individuals with hearing loss, and can greatly hinder communication. No computer algorithm or prosthetic device can yet mimic what the brain does in the presence of competing background noise. The overall goal of this proposal is to characterize how the auditory system adapts to dynamically changing multi-stream environments, allowing rapid and flexible shifting to different sound events in one's surroundings.
Specific Aim 1 determines how ambiguous input is physiologically stored.
Specific Aim 2 characterizes how attention modifies neural activity to support behavior in ambiguous situations.
Specific Aim 3 identifies how neural representations of auditory input accommodate to changing multi-stream environments.
The aims will be accomplished by obtaining behavioral and multiple electrophysiological indices of sound organization when the input is perceptually ambiguous, thus providing a novel model in normal-hearing adults for characterizing how the brain maintains stable sound events in noisy environments. A key strength of the current project is the ability to assess sound organization neurophysiologically in auditory cortex for both attended and unattended sounds. The results of the proposed experiments will elucidate how representations of dynamically changing environments are maintained by brain systems, characterizing how automatic and attentive mechanisms of scene analysis interact in the perception of one among many streams. This will fill a profound gap in our understanding of the neural mechanisms contributing to the perception of stable auditory events in complex and dynamically changing sound environments.
Impairments to peripheral or central auditory mechanisms severely impact the ability to identify distinct sound events or to select relevant sound streams, especially in situations with multiple competing sound sources. The results of this proposal will significantly advance our understanding of auditory perception in more natural listening situations based on models of competing attention, and will provide a basis for developing new hypotheses to assess the disordered listening that is central to numerous developmental disabilities, hearing loss, schizophrenia, and aging. Understanding how the brain segregates and integrates sounds in complex environments holds important implications for the development of medical technologies, such as hearing aids, prosthetic devices, and computer models of speech perception that must cope with competing background sounds.