This proposal advances our understanding of how the brain separates concurrent sounds, such as multiple voices. Concurrent-sound segregation is a basic perceptual ability and is necessary for successful speech perception in complex acoustic environments; deficits in this ability are a major complaint among hearing-impaired (HI) listeners. The neural mechanisms underlying concurrent-sound segregation are poorly understood, and characterizing them is a prerequisite for more effectively alleviating listening difficulties in the HI. One of the most powerful cues for segregating concurrent harmonic sounds is a difference in their fundamental frequencies (F0s). The F0s of concurrent sounds can theoretically be extracted using 'spectral' or 'temporal' pattern-matching mechanisms, which operate on neural representations of harmonic structure (rate-place profiles) or of F0-related periodicities (periodicity-place profiles), respectively. While studies have begun examining these profiles at peripheral and subcortical levels, it is unknown whether neural representations of simultaneous harmonic sounds at the level of auditory cortex (AC) contain sufficient information to enable their perceptual segregation. To address this gap, we will combine neurophysiological recordings in primary and non-primary AC of awake, behaving monkeys with computational models to test the following hypotheses (Specific Aim 1): (1) neural representations of concurrent complex tones contain sufficient information to reliably infer their respective F0s and to enable their perceptual segregation based on F0 differences; (2) the salience of these representations is increased by introducing additional sound-segregation cues (differences in onset time, level, or spatial location).
While the results of Specific Aim 1 will characterize neural representations of 'generic' concurrent harmonic sounds, they will not address whether these representations support the segregation and identification of concurrent speech sounds with different spectral envelopes. Thus, in Specific Aim 2, we will test the hypothesis that concurrent vowels differing in F0 can be successfully segregated and identified from neural responses in AC, using spectral and temporal pattern-matching models (classifiers). Our approach is unique and clinically relevant in that it bridges the gap between single-neuron recordings in experimental animals and noninvasive recordings in humans. The results of this project will enhance our understanding of speech perception in real-world environments and will ultimately contribute to public health by facilitating the development of more effective clinical approaches to alleviating perceptual difficulties in HI listeners, the elderly, and individuals with certain developmental language disorders.
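To make the 'spectral' pattern-matching idea concrete, the following is a minimal illustrative sketch, not the proposal's actual models: a simple harmonic sieve scores each candidate F0 by how much spectral energy falls near its harmonics, then cancels the dominant source's harmonics before estimating the second F0. All parameters (16 kHz sampling rate, 10 equal-amplitude harmonics, 3% sieve tolerance, a 4-semitone F0 difference between 200 and 250 Hz) are assumptions chosen for illustration.

```python
import numpy as np

FS = 16_000     # sampling rate in Hz (assumed)
N = FS // 2     # 0.5 s of signal
N_HARM = 10     # harmonics per complex tone (assumed)
TOL = 0.03      # sieve half-width, as a fraction of harmonic frequency (assumed)

def harmonic_complex(f0):
    """Equal-amplitude harmonic complex tone with fundamental f0."""
    t = np.arange(N) / FS
    return sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, N_HARM + 1))

def sieve_score(spectrum, freqs, f0):
    """Total spectral energy within +/-TOL of each harmonic of f0."""
    return sum(spectrum[np.abs(freqs - k * f0) < TOL * k * f0].sum()
               for k in range(1, N_HARM + 1))

def estimate_two_f0s(signal, candidates):
    """Greedy two-source sieve: estimate the dominant F0, cancel its
    harmonics from the spectrum, then estimate the second F0."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / FS)
    estimates = []
    for _ in range(2):
        scores = [sieve_score(spectrum, freqs, f0) for f0 in candidates]
        f0_hat = candidates[int(np.argmax(scores))]
        estimates.append(f0_hat)
        for k in range(1, N_HARM + 1):  # cancel the winner's harmonics
            spectrum[np.abs(freqs - k * f0_hat) < TOL * k * f0_hat] = 0.0
    return sorted(estimates)

# Mixture of two complexes with a ~4-semitone F0 difference.
mix = harmonic_complex(200.0) + harmonic_complex(250.0)
estimates = estimate_two_f0s(mix, np.arange(150.0, 350.0, 1.0))
print(estimates)  # roughly [200, 250], within the sieve's tolerance band
```

The complementary 'temporal' route would instead locate peaks in periodicity (e.g., autocorrelation) profiles at the two F0 periods; the cancel-and-re-estimate step above is one simple way to handle two simultaneous sources with a single-F0 matcher.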

Public Health Relevance

The current lack of understanding of how the brain separates concurrent sounds represents a major obstacle to addressing the perceptual difficulties of hearing-impaired individuals in complex acoustic environments (e.g., multiple voices in a cafeteria). The experiments described in this proposal will fill this gap by examining neural responses in different areas of auditory cortex to concurrent harmonic and periodic sounds such as those commonly encountered in speech (vowels) and music (notes). A better understanding of the neural mechanisms of auditory scene analysis will ultimately contribute to public health by facilitating the development of more effective clinical approaches to alleviating auditory perceptual difficulties in hearing-impaired listeners, the elderly, and individuals with certain developmental language disorders.

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section
Auditory System Study Section (AUD)
Program Officer
Platt, Christopher
Albert Einstein College of Medicine
Schools of Medicine
United States