The broad objective of this proposal is to investigate how the brain constructs word-form and lexical representations of spoken words from lower-level acoustic-phonetic and phonemic information. The brain's ability to parse meaning from acoustic sensory input within a few hundred milliseconds may arise from multiple levels of representation that increase in abstraction as information is processed through local and long-distance cortical circuits in the classical language network. These processes will be studied using an integrated, multimodal neurophysiological approach that combines high-resolution invasive electrocorticography (ECoG) in epilepsy patients implanted with chronic subdural electrodes with non-invasive magnetoencephalography (MEG), which provides whole-brain coverage of neural activity. The two techniques are highly complementary and together provide unprecedented spatiotemporal resolution for characterizing lexical processing as it unfolds on the millisecond level across adjacent neural populations.

The primary analysis of the ECoG and MEG data will involve constructing a neural "state-space" that represents the shared activity across electrodes. Recently developed methods allow this activity to be visualized over time as trajectories through the state-space. Participants will hear a list of words, each spoken by four different speakers, and the neural trajectories will be traced from the acoustic onset to the hypothesized lexical and semantic representations that emerge 200-400 ms later. It is hypothesized that different instances of the same word will begin in different parts of the state-space (due to acoustic differences across speakers) and will eventually converge into regions that represent neural activity associated with a particular lexical item. These analyses will be performed on ECoG data from epilepsy patients and confirmed with MEG in healthy controls. MEG will provide additional information about these trajectories because it samples the entire cortex and may detect relevant activity in areas not covered by the ECoG grids (for example, areas in the contralateral hemisphere). Finally, because the stimuli will be carefully controlled for linguistic features such as lexical frequency and phonotactic probability, it will be possible to examine how stimulus-level statistical properties are encoded by hypothesized prediction mechanisms in the brain.

In general, these approaches will elucidate the processes of abstraction and higher-level representation for words in the brain, and this work will be among the first to characterize the neural code for speech perception from the perspective of dynamic, interactive processes. Understanding how the brain constructs word-level representations that carry rich, malleable, and complex meaning is crucial for characterizing, diagnosing, and treating disorders that impair language learning and processing, and it may also help improve assistive technologies such as cochlear implants. The ability to decode neural representations at this level will further our understanding of fundamental brain processes that underlie many aspects of higher cognitive function.
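To make the state-space analysis concrete, the sketch below illustrates in Python how such trajectories and their convergence could be computed. It is not the proposal's actual pipeline: the dimensionality-reduction choice (PCA), the synthetic stand-in data, and all variable names are assumptions made for illustration; real analyses would use recorded high-gamma ECoG or source-localized MEG activity and may use dedicated state-space methods instead.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_speakers, n_electrodes, n_times = 4, 64, 120  # 120 samples ~= 0-600 ms

# Synthetic trials (illustration only): each speaker's trial starts from a
# speaker-specific "acoustic" pattern and blends toward one shared "lexical"
# pattern, mimicking the convergence hypothesis described above.
shared = rng.normal(size=n_electrodes)
trials = np.empty((n_speakers, n_times, n_electrodes))
for s in range(n_speakers):
    onset = rng.normal(size=n_electrodes)        # speaker-specific onset
    w = np.linspace(0.0, 1.0, n_times)[:, None]  # acoustic -> lexical blend
    trials[s] = (1 - w) * onset + w * shared
    trials[s] += 0.1 * rng.normal(size=(n_times, n_electrodes))  # noise

# State-space: fit one low-dimensional projection on all time points from
# all trials, so every trajectory lives in the same space.
pca = PCA(n_components=3)
trajectories = pca.fit_transform(
    trials.reshape(-1, n_electrodes)
).reshape(n_speakers, n_times, 3)

# Convergence metric: mean pairwise distance between the speakers'
# trajectories at each time point. If instances of the same word converge
# on a common lexical representation, this distance should shrink late
# in the epoch.
dists = np.array([
    np.mean([np.linalg.norm(trajectories[i, t] - trajectories[j, t])
             for i in range(n_speakers)
             for j in range(i + 1, n_speakers)])
    for t in range(n_times)
])
print("across-speaker distance, first vs. last 100 ms:",
      dists[:20].mean().round(2), dists[-20:].mean().round(2))
```

The key design choice here is projecting all trials into one common low-dimensional space, so that distances between the four speakers' trajectories at each time point are directly comparable; convergence by 200-400 ms would then appear as a drop in the across-speaker distance over the epoch.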

Public Health Relevance

Developmental disorders and traumatic injuries can often impair an individual's ability to understand and produce speech. The proposed studies will investigate how the brain translates an acoustic signal into a rich and complex representation that carries meaning. Understanding these neural processes will lead to brain-based treatments for individuals who suffer from conditions that prevent or disrupt the extraction of meaning from speech, such as dyslexia, specific language impairment, aphasias, and autism.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Postdoctoral Individual National Research Service Award (F32)
Project #
1F32DC013486-01A1
Application #
8647627
Study Section
Special Emphasis Panel (ZDC1-SRB-L (41))
Program Officer
Sklare, Dan
Project Start
2013-10-01
Project End
2016-09-30
Budget Start
2013-10-01
Budget End
2014-09-30
Support Year
1
Fiscal Year
2013
Total Cost
$52,190
Indirect Cost
Name
University of California San Francisco
Department
Neurosurgery
Type
Schools of Medicine
DUNS #
094878337
City
San Francisco
State
CA
Country
United States
Zip Code
94143
Publications
Conant, David F; Bouchard, Kristofer E; Leonard, Matthew K et al. (2018) Human Sensorimotor Cortex Control of Directly Measured Vocal Tract Movements during Vowel Production. J Neurosci 38:2955-2966
Khoshkhoo, Sattar; Leonard, Matthew K; Mesgarani, Nima et al. (2018) Neural correlates of sine-wave speech intelligibility in human frontal and temporal cortex. Brain Lang 187:83-91
Rao, Vikram R; Leonard, Matthew K; Kleen, Jonathan K et al. (2017) Chronic ambulatory electrocorticography from human speech cortex. Neuroimage 153:273-282
Moses, David A; Mesgarani, Nima; Leonard, Matthew K et al. (2016) Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity. J Neural Eng 13:056004
Leonard, Matthew K; Cai, Ruofan; Babiak, Miranda C et al. (2016) The peri-Sylvian cortical network underlying single word repetition revealed by electrocortical stimulation and direct neural recordings. Brain Lang
Leonard, Matthew K; Baud, Maxime O; Sjerps, Matthias J et al. (2016) Perceptual restoration of masked speech in human cortex. Nat Commun 7:13619
Leonard, Matthew K; Bouchard, Kristofer E; Tang, Claire et al. (2015) Dynamic encoding of speech sequence probability in human temporal cortex. J Neurosci 35:7203-14
Cibelli, Emily S; Leonard, Matthew K; Johnson, Keith et al. (2015) The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening. Brain Lang 147:66-75
Leonard, Matthew K; Chang, Edward F (2014) Dynamic speech representations in the human temporal lobe. Trends Cogn Sci 18:472-9