Considering the diversity and richness of languages spoken around the globe, speech perception is arguably one of the main achievements of the human brain. While there is now broad support among cognitive neuroscientists for the concept of a hierarchy of cortical areas subserving auditory and speech processing, there is substantial disagreement about the roles different brain areas play in speech recognition, how the speech system learns and represents spoken words, and how learning words through one sense (e.g., through reading) can allow the brain to recognize the same words through another sense (e.g., through hearing). With support from the National Science Foundation, we will use advanced functional magnetic resonance imaging (fMRI) to investigate these questions. Specifically, the project is designed to resolve a major ongoing controversy regarding how and where words are represented in the brain by, for the first time, applying fMRI techniques to this field that have been used successfully to answer related questions in the domain of written words. A second study will probe how the brain learns new spoken words and adds them to the auditory lexicon. In the final set of studies, the project will break new ground in our understanding of how the auditory and visual systems are linked in reading, and how this interaction is enabled by cross-sensory learning. Understanding the neural bases of speech processing and of cross-sensory language learning is of great interest not only for basic science but also for fields ranging from education and language learning to engineering (by elucidating effective learning algorithms for deep multisensory hierarchies, e.g., for automatic speech recognition) and biomedicine (by building a foundation to study a range of disorders, including dyslexia and language comprehension deficits).
The research project will also provide opportunities to train the next generation of scientists at the graduate, undergraduate, and high school levels.
In more scientific detail, the overarching goal of the proposed project is to study the existence and plasticity of auditory lexica in the human brain, and to understand the coupling of written and auditory speech representations that permits cross-modal transfer of lexical learning. The project has three Aims. Aim 1 addresses the current controversy regarding the existence and location of auditory word representations in the brain. Translating techniques we previously developed to identify a "visual lexicon" in the "Visual Word Form Area", we will use sensitive fMRI rapid adaptation (fMRI-RA) and other advanced fMRI techniques to test the hypothesis of a (receptive) auditory lexicon within an analogous "Auditory Word Form Area" in left anterior superior temporal cortex. At the same time, we will test whether another lexicon exists in motor-related speech areas of the auditory dorsal stream, representing articulatory word forms that are automatically activated by speech perception via an "inverse model". Aim 2 is designed to probe the plasticity induced by the addition of novel words to the auditory lexicon. Aim 3 studies the interaction of written language with the auditory system. Prior studies have shown that reading activates phonological representations in proficient readers, and that this phonological recoding is crucial for reading acquisition. We will test the novel hypothesis that written words cause widespread activation of the auditory system, and that training on novel written words can drive word-selective rewiring in the auditory lexical system.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.