Communicating using spoken language feels effortless and automatic to listeners with no hearing deficits or language processing problems. But this subjective ease belies the number and complexity of the operations that, in aggregate, constitute speech perception. Transforming the acoustic signals that arrive at the ear into the abstract representations that underpin language processing requires many intermediate computational steps. When one or several of these intermediate operations malfunction, the result can be pathologies of hearing, speech perception, or language processing. Developing a theoretically well-motivated, mechanistic, and neurobiologically grounded understanding of this system remains one of the foundational challenges of the cognitive neuroscience of hearing, speech, and language.

The research program outlined in this grant proposal aims to further develop a brain-based model of speech perception that is motivated by the insights of linguistic and psychological research, on the one hand, and is sensitive to the physical (acoustic) and neurobiological constraints of speech processing, on the other. The proposed experiments use the noninvasive electrophysiological neuroimaging technique magnetoencephalography (MEG), paired with magnetic resonance imaging (MRI). MEG is particularly useful because it combines very high temporal resolution (necessary because speech processing is fast) with good spatial resolution (necessary to understand the anatomical organization of the system).

We investigate the speech processing system in the context of three specific research aims. The first aim is to characterize more precisely the functional architecture of speech processing in the brain: in particular, the computational contribution of the critical regions mediating speech processing, in both perception and production. We further test whether the same architectural (dual-stream) model accounts for the perception of speech (well established) as well as the covert (internal) and overt production of speech (a novel extension). The studies in the second aim test whether the intrinsic brain rhythms (neural oscillations) observed in both animal and human studies play a causal role in speech processing, as has recently been hypothesized. For example, the alignment of slow brain rhythms with the input signal may be necessary to understand speech, by parsing the continuous spoken input into the right 'chunk size' for further analysis. In the third aim, we turn to the perennial puzzle of brain asymmetry and its role in speech processing: building on the oscillation studies, we evaluate whether left and right auditory regions execute the same or different analyses of the speech input. As a group, these studies serve to further specify the 'parts list' of auditory and speech processing, with a special emphasis on timing and its implications for health and disease.
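As a purely illustrative aside, not part of the proposal itself: the entrainment idea in the second aim can be made concrete with a minimal analysis sketch. If slow neural rhythms align with the speech signal, the coherence between the speech amplitude envelope and a recorded channel should be elevated at syllable-rate (theta-band) frequencies. The Python sketch below uses simulated data and hypothetical variable names (env, neural); it assumes only NumPy and SciPy.

    import numpy as np
    from scipy.signal import coherence

    fs = 1000                        # sampling rate in Hz
    t = np.arange(0, 60, 1 / fs)     # 60 s of simulated data
    rng = np.random.default_rng(0)

    # Toy "speech envelope": slow amplitude fluctuations at the syllabic
    # rate (roughly 3-8 Hz), the time scale at which alignment of neural
    # rhythms is hypothesized to parse speech into chunks.
    env = np.zeros_like(t)
    for f0 in (3.5, 4.5, 6.0):       # a few syllable-rate components
        env += np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
    env -= env.min()                 # amplitude envelopes are non-negative

    # Toy "neural" channel: partially tracks the envelope, buried in
    # broadband noise (a crude stand-in for a single MEG sensor).
    neural = 0.5 * env + rng.normal(scale=2.0, size=t.size)

    # Magnitude-squared coherence between envelope and neural signal;
    # entrainment predicts elevated coherence in the theta band.
    freqs, cxy = coherence(env, neural, fs=fs, nperseg=4 * fs)
    theta = (freqs >= 3) & (freqs <= 8)
    print(f"mean 3-8 Hz coherence: {cxy[theta].mean():.3f}")

In an actual MEG analysis the envelope would be extracted from the speech audio (e.g., via a Hilbert transform) and the neural time series from sensor or source data over auditory cortex, but the coherence logic is the same.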
Communicating with spoken language is the most common form of human interaction. The complex processes that underlie comprehension can malfunction at many levels, leading to hearing, speech, or language disorders. Current noninvasive brain recording techniques can help us understand how the speech processing system functions in health and malfunctions in acquired and developmental disorders.