This proposal advances our understanding of how the brain separates concurrent sounds, such as multiple voices. Concurrent-sound segregation is a basic perceptual ability and is necessary for successful speech perception in complex acoustic environments. Deficits in this ability are a major complaint of hearing-impaired (HI) listeners, yet the neural mechanisms underlying concurrent-sound segregation remain poorly understood. Characterizing these mechanisms is a prerequisite for more effectively alleviating listening difficulties in the HI.

One of the most powerful cues for segregating concurrent harmonic sounds is a difference in their fundamental frequencies (F0s). The F0s of concurrent sounds can theoretically be extracted using 'spectral' or 'temporal' pattern-matching mechanisms, which operate on neural representations of harmonic structure (rate-place profiles) or of F0-related periodicities (periodicity-place profiles), respectively. Although studies have begun to examine these profiles at peripheral and subcortical levels, it is unknown whether neural representations of simultaneous harmonic sounds at the level of auditory cortex (AC) contain sufficient information to enable their perceptual segregation.

To address this gap, we will combine neurophysiological recordings in primary and non-primary AC of awake, behaving monkeys with computational models to test the following hypotheses (Specific Aim 1): (1) neural representations of concurrent complex tones contain sufficient information to reliably infer their respective F0s and to enable their perceptual segregation based on F0 differences; (2) the salience of these representations is increased by introducing additional sound-segregation cues (differences in onset time, level, or spatial location). Whereas the results of Specific Aim 1 will characterize neural representations of 'generic' concurrent harmonic sounds, they will not address whether these representations support the segregation and identification of concurrent speech sounds with different spectral envelopes. Thus, in Specific Aim 2, we will test the hypothesis that concurrent vowels differing in F0 can be successfully segregated and identified from neural responses in AC, using spectral and temporal pattern-matching models (classifiers).

Our approach is unique and clinically relevant in that it bridges the gap between single-neuron recordings in experimental animals and noninvasive recordings in humans. The results of this project will enhance our understanding of speech perception in real-world environments and will ultimately contribute to public health by facilitating the development of more effective clinical approaches to alleviating perceptual difficulties in the HI, the elderly, and individuals with certain developmental language disorders.
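The logic of the spectral pattern-matching account can be sketched computationally: a rate-place profile is compared against harmonic templates at candidate F0s, and the best-matching, well-separated candidates are taken as F0 estimates of the concurrent sounds. The minimal Python sketch below illustrates this idea on a simulated rate-place profile; it is an illustrative toy, not the proposal's actual model, and all values (the F0 pair, harmonic count, tuning bandwidth, and the neighbor-exclusion heuristic) are arbitrary assumptions.

```python
import numpy as np

# Toy "rate-place profile": firing rate as a function of characteristic
# frequency (CF) for two concurrent complex tones. All parameter values
# are arbitrary assumptions chosen for illustration only.
cfs = np.linspace(100, 4000, 2000)   # characteristic frequencies (Hz)
true_f0s = (160.0, 200.0)            # hypothetical concurrent F0s (Hz)

def rate_profile(f0, n_harmonics=10, bw=0.03):
    """Gaussian activity bumps at the first n_harmonics of f0;
    relative bandwidth bw stands in for tonotopic tuning width."""
    rate = np.zeros_like(cfs)
    for h in range(1, n_harmonics + 1):
        f = h * f0
        rate += np.exp(-0.5 * ((cfs - f) / (bw * f)) ** 2)
    return rate

# Observed profile: superposition of the two sources' profiles.
profile = rate_profile(true_f0s[0]) + rate_profile(true_f0s[1])

def template_score(f0):
    """Harmonic-sieve score: correlation of the observed profile with
    an ideal single-source template at the candidate f0."""
    template = rate_profile(f0)
    return np.dot(profile, template) / np.linalg.norm(template)

candidates = np.arange(80.0, 401.0, 1.0)   # candidate F0 grid (Hz)
scores = np.array([template_score(f0) for f0 in candidates])

# Take the two best-scoring candidates as F0 estimates, excluding near
# neighbors of the first peak (a crude heuristic, assumed for this toy).
best = candidates[np.argmax(scores)]
far = np.abs(candidates - best) > 0.1 * best
second = candidates[far][np.argmax(scores[far])]
print(f"Estimated F0s: {sorted([best, second])} Hz (true: {sorted(true_f0s)})")
```

An analogous 'temporal' sketch would score candidate F0s against periodicity-place profiles rather than tonotopic rate-place profiles; the classifiers in Specific Aim 2 extend the same pattern-matching logic to vowel identity.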
The current lack of understanding of how the brain separates concurrent sounds represents a major obstacle to addressing the perceptual difficulties of hearing-impaired individuals in complex acoustic environments (e.g., multiple voices in a cafeteria). The experiments described in this proposal will fill this gap by examining neural responses in different areas of auditory cortex to concurrent harmonic and periodic sounds such as those commonly encountered in speech (vowels) and music (notes). A better understanding of the neural mechanisms of auditory scene analysis will ultimately contribute to public health by facilitating the development of more effective clinical approaches to alleviating auditory perceptual difficulties in the hearing-impaired, the elderly, and individuals with certain developmental language disorders.