The broad objective of this proposal is to investigate how the brain constructs word-form and lexical representations of spoken words from lower-level acoustic-phonetic and phonemic information. The brain's ability to parse meaning from acoustic sensory inputs within a few hundred milliseconds may arise from multiple levels of representation that increase in abstraction as information is processed through local and long-distance cortical circuits in the classical language network. These processes will be studied using an integrated, multimodal neurophysiological approach that combines high-resolution invasive electrocorticography (ECoG) in epilepsy patients implanted with chronic subdural electrodes with non-invasive magnetoencephalography (MEG), which provides whole-brain coverage of neural activity. The techniques are highly complementary and will provide unprecedented spatiotemporal resolution to characterize lexical processing as it unfolds at the millisecond level across adjacent neural populations. The primary analysis for ECoG and MEG data will involve constructing a neural "state-space", which represents the common activity across electrodes. Recently developed methods allow this activity to be visualized in the state-space across time as neural trajectories. Participants will hear a list of words, each spoken by four different speakers, and the neural trajectories will be traced from the acoustic onset to the hypothesized lexical and semantic representations 200-400 ms later. It is hypothesized that different instances of the same word will begin in different parts of the state-space (due to acoustic differences across speakers), and will eventually converge into areas that represent neural activity associated with a particular lexical item. These analyses will be performed with data from ECoG in epilepsy patients, and will also be confirmed with healthy controls in MEG.
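The state-space analysis and the convergence hypothesis described above can be illustrated with a minimal sketch. The code below is a hypothetical simulation, not the proposal's actual analysis pipeline: it synthesizes multi-electrode activity for one word spoken by four speakers (speaker-specific patterns at onset that blend toward a shared word-level pattern in a 200-400 ms window), projects the activity into a low-dimensional state-space via PCA, and tests whether the speakers' trajectories converge over time. All array sizes, time windows, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed values): 64 electrodes, 500 ms at 1 kHz,
# one word produced by 4 speakers. Early activity is speaker-specific
# (acoustic variation); later activity converges on a shared "lexical" pattern.
n_elec, n_time, n_speakers = 64, 500, 4
lexical_pattern = rng.normal(size=n_elec)                  # shared word-level target
speaker_patterns = rng.normal(size=(n_speakers, n_elec))   # speaker-specific onsets

# Mixing weight shifts from speaker-specific to lexical between ~150 and ~350 ms
w = np.clip((np.arange(n_time) - 150) / 200, 0, 1)
trials = np.stack([
    (1 - w)[None, :] * spk[:, None] + w[None, :] * lexical_pattern[:, None]
    + 0.1 * rng.normal(size=(n_elec, n_time))
    for spk in speaker_patterns
])  # shape: (speakers, electrodes, time)

# Build a low-dimensional state-space with PCA (SVD of centered data)
X = trials.transpose(0, 2, 1).reshape(-1, n_elec)   # (speakers*time, electrodes)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
traj = (Xc @ Vt[:3].T).reshape(n_speakers, n_time, 3)  # 3-D neural trajectories

def mean_pairwise_dist(t):
    """Mean distance between all pairs of speakers' trajectories at time t."""
    pts = traj[:, t, :]
    dists = [np.linalg.norm(pts[i] - pts[j])
             for i in range(n_speakers) for j in range(i + 1, n_speakers)]
    return float(np.mean(dists))

# Convergence test: trajectories should be far apart near acoustic onset
# and close together in the hypothesized lexical window
early = np.mean([mean_pairwise_dist(t) for t in range(0, 100)])
late = np.mean([mean_pairwise_dist(t) for t in range(400, 500)])
print(early > late)
```

In the real analyses, the trial matrix would come from high-gamma ECoG or source-localized MEG responses rather than a simulation, and the dimensionality-reduction step could use any comparable method; the distance-over-time comparison is one simple way to quantify the hypothesized convergence.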
MEG will provide additional information about these trajectories, since it is able to sample the entire cortex, and may detect relevant activity in areas that are not covered by the ECoG grids (for example, areas in the contralateral hemisphere). Finally, since the stimuli will be carefully controlled for linguistic features such as lexical frequency and phonotactic probability, it will be possible to examine how stimulus-level statistical properties are encoded by hypothesized prediction mechanisms in the brain. In general, these approaches will elucidate the processes of abstraction and higher-level representation for words in the brain, and will be among the first studies to characterize the neural code for speech perception from the perspective of dynamic, interactive processes. Understanding how the brain constructs word-level representations that contain a rich, malleable, and complex meaning is crucial for characterizing, diagnosing, and treating disorders that impair language learning and processing. It may also help improve assistive technologies such as cochlear implants. The ability to decode neural representations at this level will further our understanding of fundamental brain processes that underlie many aspects of higher cognitive function.
Developmental disorders and traumatic injuries can impair an individual's ability to understand and produce speech. The proposed studies will investigate how the brain translates an acoustic signal into a rich and complex representation that has meaning. Understanding these neural processes will inform brain-based treatments for individuals with conditions that prevent or disrupt the extraction of meaning from speech, such as dyslexia, specific language impairment, aphasia, and autism.