The goal of this proposed work is to understand the neural mechanisms used to process stimulus components that are likely to arise from a single source, despite their wide separation in frequency or time. The normal auditory system is remarkably good at grouping such components appropriately, but listeners with hearing impairment have considerable difficulty in situations that require the perceptual segregation of multiple competing sound sources. Psychophysical paradigms that highlight interactions between temporally or spectrally distant elements will be examined physiologically with single-unit extracellular recordings in the inferior colliculus (IC) of the awake marmoset. To investigate influences across time, the dependence of the neural response to a short probe on the history of preceding stimulation (forward masking) will be quantified and related to psychoacoustic studies. To study interactions across frequency, responses will be recorded to sounds made up of widely spaced components that have been shown to disrupt listeners' ability to analyze dynamic changes in a target channel. The corresponding perceptual effect, known as modulation detection interference (MDI), cannot be explained by peripheral auditory responses. The temporal and spectral context-dependent representation of speech signals will also be studied, using stimuli that match those used in behavioral paradigms. These experiments will help establish the role of the IC in the apparent transition from a peripheral code that exhibits weak temporal and spectral context dependence (with respect to perceptual measures) to a higher-order (cortical) representation that depends heavily on sound features removed in time or frequency from a target signal component. Potentially confounding factors, such as anesthesia and species differences, are minimized in the proposed preparation by using an awake primate that communicates with vocalizations.
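For readers unfamiliar with the two paradigms, the following is a minimal NumPy sketch of how forward-masking and MDI stimuli are typically synthesized. All names (`tone`, `forward_masking_stimulus`, `mdi_stimulus`) and all parameter values (frequencies, durations, modulation depth, sample rate) are illustrative assumptions, not the proposal's actual stimulus specifications.

```python
import numpy as np

FS = 48_000  # sample rate in Hz (illustrative)

def tone(freq_hz, dur_s, level=1.0, fs=FS):
    """Pure tone with 5 ms raised-cosine on/off ramps to limit spectral splatter."""
    t = np.arange(int(dur_s * fs)) / fs
    x = level * np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(0.005 * fs)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    x[:n_ramp] *= ramp
    x[-n_ramp:] *= ramp[::-1]
    return x

def forward_masking_stimulus(masker_hz, probe_hz, gap_s, fs=FS):
    """Masker tone, silent gap, then a short probe; the probe response is
    measured as a function of the gap duration and masker parameters."""
    masker = tone(masker_hz, 0.200, fs=fs)  # 200 ms masker (illustrative)
    gap = np.zeros(int(gap_s * fs))
    probe = tone(probe_hz, 0.020, fs=fs)    # 20 ms probe (illustrative)
    return np.concatenate([masker, gap, probe])

def mdi_stimulus(target_hz, interferer_hz, mod_hz, dur_s=0.5, fs=FS):
    """Amplitude-modulated target plus a spectrally remote modulated interferer;
    in MDI the interferer's modulation degrades detection of the target's
    modulation despite the wide frequency separation."""
    t = np.arange(int(dur_s * fs)) / fs
    env = 1 + 0.5 * np.sin(2 * np.pi * mod_hz * t)  # 50% modulation depth
    target = env * np.sin(2 * np.pi * target_hz * t)
    interferer = env * np.sin(2 * np.pi * interferer_hz * t)
    return target + interferer

# Example: 4 kHz masker and probe with a 50 ms gap; 1 kHz target with a
# 4 kHz interferer, both modulated at 10 Hz (values are placeholders).
fm = forward_masking_stimulus(4000, 4000, 0.050)
mdi = mdi_stimulus(1000, 4000, 10)
```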

Public Health Relevance

Perhaps the most challenging environment for listeners with hearing impairment is one with many competing dynamic sound sources: the mixture tends to fuse into a single stream, making selective attention to one or more specific sources nearly impossible. To understand how the normal auditory system deals with this situation so effortlessly, single-cell responses in the auditory midbrain will be recorded while sounds are played that emphasize some of the acoustic features apparently used by the system to group (and potentially segregate) components generated by a single source. Results should suggest mechanisms and strategies used by the brain to process these sounds, potentially providing clues for improving artificial algorithms used for signal detection, scene analysis, and efficient representation of information.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Postdoctoral Individual National Research Service Award (F32)
Project #: 1F32DC009164-01A1
Application #: 7407160
Study Section: Communication Disorders Review Committee (CDRC)
Program Officer: Cyr, Janet
Project Start: 2007-09-01
Project End: 2009-08-31
Budget Start: 2007-09-01
Budget End: 2008-08-31
Support Year: 1
Fiscal Year: 2007
Total Cost: $46,826
Indirect Cost:
Name: Johns Hopkins University
Department: Biomedical Engineering
Type: Schools of Medicine
DUNS #: 001910777
City: Baltimore
State: MD
Country: United States
Zip Code: 21218