This proposal advances our understanding of how the brain separates concurrent sounds, such as multiple voices. Concurrent-sound segregation is a basic perceptual ability and is necessary for successful speech perception in complex acoustic environments. Deficits in this ability are among the major complaints of hearing-impaired (HI) listeners. The neural mechanisms underlying concurrent-sound segregation are poorly understood, and characterizing them is a prerequisite for more effectively alleviating listening difficulties in the HI. One of the most powerful cues for the segregation of concurrent harmonic sounds is a difference in their fundamental frequencies (F0s). The F0s of concurrent sounds can theoretically be extracted using 'spectral' or 'temporal' pattern-matching mechanisms, which operate on neural representations of harmonic structure (rate-place profiles) or F0-related periodicities (periodicity-place profiles), respectively. While studies have begun examining these profiles at peripheral and subcortical levels, it is unknown whether neural representations of simultaneous harmonic sounds at the level of auditory cortex (AC) contain sufficient information to enable their perceptual segregation.

To address this gap, we will combine neurophysiological recordings in primary and non-primary AC of awake, behaving monkeys with computational models to test the following hypotheses (Specific Aim 1): (1) neural representations of concurrent complex tones contain sufficient information to reliably infer their respective F0s and enable their perceptual segregation based on F0 differences; (2) the salience of these representations is increased by introducing additional sound-segregation cues (differences in onset time, level, or spatial location). While the results of Specific Aim 1 will characterize neural representations of 'generic' concurrent harmonic sounds, they will not address whether these representations support the segregation and identification of concurrent speech sounds with different spectral envelopes. Thus, in Specific Aim 2, we will test the hypothesis that concurrent vowels differing in F0 can be successfully segregated and identified based on neural responses in AC, using spectral and temporal pattern-matching models (classifiers).

Our approach is unique and clinically relevant in that it bridges the gap between single-neuron recordings in experimental animals and noninvasive recordings in humans. The results of this project will enhance our understanding of speech perception in real-world environments and will ultimately contribute to public health by facilitating the development of more effective clinical approaches to alleviating perceptual difficulties in the HI, the elderly, and individuals with certain developmental language disorders.
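
To make the contrast between the two hypothesized pattern-matching mechanisms concrete, the following Python sketch recovers the two F0s of a synthetic two-tone mixture using a simple autocorrelation analysis ('temporal') and a harmonic-sieve analysis ('spectral'). This is an illustration only, not the proposal's actual computational models: all signal parameters are hypothetical, and the raw waveform stands in for the neural profiles on which the real models would operate.

# Illustrative sketch (not the proposal's actual models) of the two candidate
# F0-extraction mechanisms, applied to a synthetic mixture of two harmonic
# complexes. All parameters are hypothetical.
import numpy as np

fs = 10_000.0                       # sampling rate of the simulated signal (Hz)
t = np.arange(0, 0.2, 1.0 / fs)     # 200 ms of "signal"
f0_a, f0_b = 200.0, 250.0           # the two concurrent F0s to be recovered

# Mixture of two 8-harmonic complex tones plus noise. In the experiments, the
# inputs would be neural rate-place / periodicity-place profiles, not a waveform.
mix = sum(np.sin(2 * np.pi * f0_a * h * t) for h in range(1, 9))
mix += sum(np.sin(2 * np.pi * f0_b * h * t) for h in range(1, 9))
mix += 0.3 * np.random.randn(t.size)

def local_peaks(x):
    """Indices of strict local maxima of a 1-D sequence."""
    return [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]

# --- 'Temporal' mechanism: periodicities from the autocorrelation function ---
ac = np.correlate(mix, mix, mode="full")[mix.size - 1:]
lo, hi = int(fs / 400), int(fs / 150)            # search the 150-400 Hz range
lags = [lo + i for i in local_peaks(ac[lo:hi])]
lags.sort(key=lambda k: ac[k], reverse=True)     # strongest periodicities first
f0s_temporal = sorted(fs / k for k in lags[:2])

# --- 'Spectral' mechanism: harmonic-sieve matching on the magnitude spectrum ---
spec = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(mix.size, 1.0 / fs)

def sieve(f0, n_harm=8):
    """Summed spectral energy at the first n_harm multiples of candidate f0."""
    return sum(spec[np.argmin(np.abs(freqs - f0 * h))] for h in range(1, n_harm + 1))

cands = np.arange(150.0, 400.0, 5.0)             # 5 Hz steps match the FFT bins
scores = np.array([sieve(c) for c in cands])
best = sorted(local_peaks(scores), key=lambda i: scores[i], reverse=True)[:2]
f0s_spectral = sorted(cands[i] for i in best)

print("temporal estimates:", [f"{f:.0f} Hz" for f in f0s_temporal])
print("spectral estimates:", [f"{f:.0f} Hz" for f in f0s_spectral])

Both analyses recover F0s near 200 and 250 Hz; the candidate range is deliberately restricted to avoid subharmonic (octave) confusions, a simplification that the proposal's classifier models would have to handle in a principled way.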

Public Health Relevance

The current lack of understanding of how the brain separates concurrent sounds represents a major obstacle to addressing the perceptual difficulties of hearing-impaired individuals in complex acoustic environments (e.g., multiple voices in a cafeteria). The experiments described in this proposal will fill this gap by examining neural responses in different areas of auditory cortex to concurrent harmonic and periodic sounds such as those commonly encountered in speech (vowels) and music (notes). A better understanding of the neural mechanisms of auditory scene analysis will ultimately contribute to public health by facilitating the development of more effective clinical approaches to alleviating auditory perceptual difficulties in the hearing-impaired, the elderly, and individuals with certain developmental language disorders.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
5R01DC000657-21
Application #
8683140
Study Section
Auditory System Study Section (AUD)
Program Officer
Platt, Christopher
Project Start
1990-08-01
Project End
2016-06-30
Budget Start
2014-07-01
Budget End
2015-06-30
Support Year
21
Fiscal Year
2014
Total Cost
Indirect Cost
Name
Albert Einstein College of Medicine
Department
Neurology
Type
Schools of Medicine
DUNS #
City
Bronx
State
NY
Country
United States
Zip Code
10461
Fishman, Yonatan I; Kim, Mimi; Steinschneider, Mitchell (2017) A Crucial Test of the Population Separation Model of Auditory Stream Segregation in Macaque Primary Auditory Cortex. J Neurosci 37:10645-10655
Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell (2016) Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex. eNeuro 3:
Wagner, Monica; Roychoudhury, Arindam; Campanelli, Luca et al. (2016) Representation of spectro-temporal features of spoken words within the P1-N1-P2 and T-complex of the auditory evoked potentials (AEP). Neurosci Lett 614:119-26
Davidson, Cristin D; Fishman, Yonatan I; Puskás, István et al. (2016) Efficacy and ototoxicity of different cyclodextrins in Niemann-Pick C disease. Ann Clin Transl Neurol 3:366-80
Nourski, Kirill V; Steinschneider, Mitchell; Rhone, Ariane E et al. (2015) Sound identification in human auditory cortex: Differential contribution of local field potentials and high gamma power as revealed by direct intracranial recordings. Brain Lang 148:37-50
Nourski, Kirill V; Steinschneider, Mitchell; Oya, Hiroyuki et al. (2015) Modulation of response patterns in human auditory cortex during a target detection task: an intracranial electrophysiology study. Int J Psychophysiol 95:191-201
Sussman, E; Steinschneider, M; Lee, W et al. (2015) Auditory scene analysis in school-aged children with developmental language disorders. Int J Psychophysiol 95:113-24
Sussman, Elyse S; Steinschneider, Mitchell (2015) Advances in auditory neuroscience. Int J Psychophysiol 95:63-4
Fishman, Yonatan I; Steinschneider, Mitchell; Micheyl, Christophe (2014) Neural representation of concurrent harmonic sounds in monkey primary auditory cortex: implications for models of auditory scene analysis. J Neurosci 34:12425-43
Fishman, Yonatan I (2014) The mechanisms and meaning of the mismatch negativity. Brain Topogr 27:500-26

Showing the most recent 10 out of 29 publications