Research in the brain and cognitive sciences provides overwhelming evidence that even seemingly simple tasks like understanding spoken words rely on the interaction of a variety of different types of information (e.g. auditory, phonetic, phonological, lexical, semantic, etc.). These interactions are remarkable in that they appear to allow listeners, including many stroke patients with focal damage to brain regions responsible for representing these types of information, to recognize spoken language even when speech is unclear due to poor articulation, imperfect speech synthesis, noisy environments, or poor signal quality caused by digital reduction or filtering (e.g. when experienced via cochlear implant, hearing aids, or a poor phone connection). There is still vigorous debate about the nature and necessity of many of these interactions, owing to intrinsic limitations in the interpretability of current behavioral and unimodal imaging paradigms. This proposal addresses these limitations by integrating MRI, MEG, and EEG data to provide high spatiotemporal resolution images of evolving brain activation during speech perception tasks. These data will be submitted to Granger causality analysis, which allows researchers to directly examine patterns of cause and effect in the relationships between activation of different brain regions associated with the processing of specific types of information. Using these techniques, the proposed research examines the mechanisms that give rise to: (1) the discrete categorization of speech sounds (categorical perception), (2) frequency or phonotactic effects on the perception of speech sounds, and (3) the influence of semantic context on speech perception.
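The core logic of the Granger causality analysis described above can be illustrated with a minimal bivariate sketch: a signal x is said to "Granger-cause" a signal y if x's past improves prediction of y beyond what y's own past provides, assessed by comparing restricted and full autoregressive models with an F-statistic. The `granger_f` helper and simulated time series below are hypothetical illustrations, not the proposal's actual MEG/EEG source-space pipeline, which involves many regions and more elaborate model fitting.

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic testing whether past values of x improve prediction of y
    beyond y's own past (bivariate Granger causality at the given lag order).
    Illustrative helper only."""
    n = len(y)
    Y = y[lag:]
    # Restricted model: y predicted from its own past only.
    Xr = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    # Full model: y's past plus x's past.
    Xf = np.column_stack([Xr] + [x[lag - k:n - k] for k in range(1, lag + 1)])
    # Add intercept columns.
    Xr = np.column_stack([np.ones(n - lag), Xr])
    Xf = np.column_stack([np.ones(n - lag), Xf])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Xr), rss(Xf)
    df_num = lag                        # extra parameters in the full model
    df_den = (n - lag) - Xf.shape[1]    # residual degrees of freedom
    return ((rss_r - rss_f) / df_num) / (rss_f / df_den)

# Simulated example: x drives y at a one-sample lag.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * x[t - 1] + 0.1 * rng.standard_normal()

f_xy = granger_f(x, y)  # x -> y: large F (x's past predicts y)
f_yx = granger_f(y, x)  # y -> x: small F (no reverse influence)
```

The asymmetry between `f_xy` and `f_yx` is what licenses a directional interpretation: in the proposed work, analogous model comparisons between region-level activation time series would indicate which areas drive which during speech perception.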
Having identified how localized functions interact to produce robust speech perception in unimpaired listeners, we will then turn these tools to examine how preserved processes are reorganized and integrated after unilateral focal brain damage in the 18 months following stroke, allowing the recovery of function in aphasic patients. These data address central, previously irresolvable questions about brain function, the robustness of human speech perception, and the mechanisms that support recovery in aphasic patients. As basic research, the work should have wide-reaching implications for the study, assessment, and rehabilitation of patients with focal damage.
Relevance: Aphasia is a common neurological condition affecting the ability of roughly one million Americans to communicate using language (National Institute on Deafness and Other Communication Disorders, 1997). Although aphasia is generally the result of irreversible brain damage, aphasics show differing levels of functional recovery associated with new patterns of brain activation during language tasks (Heiss & Thiel, 2006). The proposed work will characterize patterns of interaction in brain activity in unimpaired listeners and examine how these patterns of activation are integrated with preserved brain function to produce improved language understanding over time in people with aphasia.
Gow Jr, David W; Nied, A Conrad (2014). Rules from words: a dynamic neural basis for a lawful linguistic process. PLoS One 9:e86212.
Gow Jr, David W (2012). The cortical organization of lexical knowledge: a dual lexicon model of spoken language processing. Brain Lang 121:273-88.
Gow Jr, David W; Keller, Corey J; Eskandar, Emad et al. (2009). Parallel versus serial processing dependencies in the perisylvian speech network: a Granger analysis of intracranial EEG data. Brain Lang 110:43-8.
Gow Jr, David W; Segawa, Jennifer A; Ahlfors, Seppo P et al. (2008). Lexical influences on speech perception: a Granger causality analysis of MEG and EEG source estimates. Neuroimage 43:614-23.