The purpose of the proposed research is to investigate the organization of speech sound representation across the auditory cortex and the spatiotemporal interactions between cortical areas during speech perception. The brain's ability to parse meaning from incoming acoustic information arises from complex interactions within cortical networks, but how these networks are organized remains unclear. High-density intracranial electrocorticography (ECoG) will be used to record neural activity from speech-selective areas in the auditory cortex of awake, behaving human subjects. These subjects are patients with medication-refractory epilepsy who have electrodes implanted based on clinical criteria. By combining multiple computational approaches, this work will elucidate (1) the acoustic feature selectivity of single electrodes on the ECoG array, (2) the spatial organization of this feature selectivity, and (3) the spatiotemporal dynamics of neural population responses to speech sounds, with the goal of uncovering emergent properties at the population level that cannot be recovered from single sites alone. Participants will listen passively to sentences spoken by a variety of speakers of different genders while the ECoG signal is recorded. Additional sound stimuli will include environmental sounds and modulation-limited noise ripples. Spectrotemporal receptive field models will be built from the neural responses to these sentences using maximally informative dimensions (MID) analysis and different parameterizations of the sound stimuli, including decomposition of the sounds into their corresponding spectrotemporal modulations and identification of the presence or absence of linguistic parameters such as manner of articulation. This will uncover the selectivity of each site for low-level and high-level sound features, as well as the organization of this selectivity across the anatomy of the cortical surface.
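The core of a spectrotemporal receptive field (STRF) analysis is a linear mapping from the recent stimulus history to each electrode's response. A minimal sketch of that step, using ridge regression on a lagged stimulus spectrogram (the function names, lag count, and regularization value here are illustrative, not the proposal's specific MID pipeline):

```python
import numpy as np

def build_lagged_design(spectrogram, n_lags):
    """Stack time-lagged copies of the stimulus spectrogram so each row
    holds the recent stimulus history preceding one neural sample.
    spectrogram: (n_times, n_freqs) array."""
    n_times, n_freqs = spectrogram.shape
    X = np.zeros((n_times, n_lags * n_freqs))
    for lag in range(n_lags):
        X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_times - lag]
    return X

def fit_strf(spectrogram, response, n_lags=30, alpha=1.0):
    """Ridge-regression STRF estimate: w = (X'X + alpha*I)^-1 X'y,
    reshaped to a (n_lags, n_freqs) time-frequency filter."""
    X = build_lagged_design(spectrogram, n_lags)
    n_features = X.shape[1]
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ response)
    return w.reshape(n_lags, -1)

# Simulated example: a site driven by frequency band 4 at a 5-sample lag.
rng = np.random.default_rng(0)
spec = rng.standard_normal((2000, 16))           # 2000 samples x 16 bands
resp = np.roll(spec[:, 4], 5) + 0.1 * rng.standard_normal(2000)
strf = fit_strf(spec, resp, n_lags=10)
print(strf.shape)  # (10, 16); largest weight near lag 5, band 4
```

Repeating this fit with different stimulus parameterizations (raw spectrogram, spectrotemporal modulations, binary phonetic-feature time series) is what lets the selectivity of each site be compared across low-level and high-level feature spaces.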
Comparing this mapping for different types of sound stimuli will reveal whether organization differs for speech and non-speech sounds, or whether organization is based purely on acoustic properties of the sound signal. Vector autoregressive models and phase-coupled oscillator models will be used to describe population-level interactions within areas of the auditory cortex. These interactions may manifest as speech-dependent cell assemblies across space that segregate processing for different phonetic features and that cannot be uncovered at the single-electrode level. Results from these experiments have implications for understanding neural coding as a whole, and will reveal the importance of population activity versus single-electrode computations in speech processing. Elucidating how the brain represents low-level and high-level features important for speech is critical for understanding, diagnosing, and treating communication disorders including delayed language learning, aphasias, dyslexia, and autism. In addition, this work may help to improve brain-machine interface design, including cochlear implants.
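The vector autoregressive (VAR) approach captures directed interactions by modeling each electrode's present activity as a linear function of the whole population's recent past. A minimal first-order sketch (the simulation values and helper name are illustrative; the proposal's models would span many electrodes and longer lag orders):

```python
import numpy as np

def fit_var1(Z):
    """Least-squares fit of a first-order VAR model z[t] = A @ z[t-1] + noise,
    where Z is (n_times, n_channels). Entry A[i, j] measures how channel j's
    past predicts channel i's present, a directed-interaction estimate."""
    past, present = Z[:-1], Z[1:]
    # Solve present = past @ A.T in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(past, present, rcond=None)
    return A_T.T

# Simulated example: two "electrodes" where channel 0 drives channel 1.
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.0],
                   [0.8, 0.3]])   # off-diagonal term: directed 0 -> 1 coupling
Z = np.zeros((5000, 2))
for t in range(1, 5000):
    Z[t] = A_true @ Z[t - 1] + 0.1 * rng.standard_normal(2)
A_hat = fit_var1(Z)
print(np.round(A_hat, 2))  # recovers A_true, including the 0 -> 1 coupling
```

Comparing the fitted coupling matrices across stimulus conditions (speech vs. non-speech) is one way such models can expose speech-dependent assemblies that single-electrode analyses would miss.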

Public Health Relevance

Human speech perception involves complex neural circuits that interact across space and time to convert acoustic information into syllables, words, and sentences. The proposed study will investigate how speech sounds are represented in the brain and how circuits in the brain interact during speech perception. Understanding these processes could lead to brain-based treatments of language disorders including delayed language learning, autism, aphasias, and dyslexia, and could help in the development of better brain-machine interfaces, including cochlear implants.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Postdoctoral Individual National Research Service Award (F32)
Project #
1F32DC014192-01
Application #
8783620
Study Section
Special Emphasis Panel (ZDC1)
Program Officer
Sklare, Dan
Project Start
2014-07-01
Project End
2017-06-30
Budget Start
2014-07-01
Budget End
2015-06-30
Support Year
1
Fiscal Year
2014
Total Cost
Indirect Cost
Name
University of California San Francisco
Department
Neurosurgery
Type
Schools of Medicine
DUNS #
City
San Francisco
State
CA
Country
United States
Zip Code
94143
Breshears, Jonathan D; Hamilton, Liberty S; Chang, Edward F (2018) Spontaneous Neural Activity in the Superior Temporal Gyrus Recapitulates Tuning for Speech Features. Front Hum Neurosci 12:360
Hamilton, Liberty S; Edwards, Erik; Chang, Edward F (2018) A Spatial Map of Onset and Sustained Responses to Speech in the Human Superior Temporal Gyrus. Curr Biol 28:1860-1871.e4
Hamilton, Liberty S; Chang, David L; Lee, Morgan B et al. (2017) Semi-automated Anatomical Labeling and Inter-subject Warping of High-Density Intracranial Recording Electrodes in Electrocorticography. Front Neuroinform 11:62
Tang, C; Hamilton, L S; Chang, E F (2017) Intonational speech prosody encoding in the human auditory cortex. Science 357:797-801
Hullett, Patrick W; Hamilton, Liberty S; Mesgarani, Nima et al. (2016) Human Superior Temporal Gyrus Organization of Spectrotemporal Modulation Tuning Derived from Speech Stimuli. J Neurosci 36:2014-26
Cheung, Connie; Hamilton, Liberty S; Johnson, Keith et al. (2016) The auditory representation of speech sounds in human motor cortex. Elife 5:
Muller, Leah; Hamilton, Liberty S; Edwards, Erik et al. (2016) Spatial resolution dependence on spectral frequency in human speech cortex electrocorticography. J Neural Eng 13:056013