The assignment of meaning to sensory stimuli lies at the foundation of human cognition. Reading written words is an excellent domain through which to study the neural bases of this cognitive process. With support from the National Science Foundation, Maximilian Riesenhuber, Ph.D., of Georgetown University, is integrating behavioral approaches with advanced brain imaging techniques to answer fundamental questions about how the brain processes written words and learns their meaning. Given the cultural recency of reading and the variability of lexica across languages, reading necessarily depends on specific neural representations that are acquired through experience with written words. For example, the written word 'apple' must be connected to semantic concept information learned previously, from seeing or tasting an actual apple. The neural mechanisms that support the learning of orthographic and semantic information, and their integration with perceptual learning, are still poorly understood. Researchers have identified an area in the left half of the brain, the so-called 'visual word form area,' that appears to be crucial to reading, yet its precise role remains debated: Does it contain a collection of word fragments, a dictionary, or an encyclopedia? Are the meanings of words and objects stored together in sensory cortex, or are they linked in higher cognitive areas? How does the brain learn that a particular word refers to a particular concept? How does the brain connect orthographic knowledge, learned in the left half of the brain, to perceptual information, which has been shown to be predominantly represented in the right half? Resolving these questions is of key interest for cognitive models of reading and for the broader understanding of how the brain makes sense of the world.

The project builds on Riesenhuber's previous experimental and computational research on general object recognition and learning, and on recent results showing that the human brain contains a visual word dictionary in the 'visual word form area.' The first study is examining how learning new words refines the brain's visual word representation. The second is examining how novel words are linked to meaning in the brain, testing the hypothesis that the visual word dictionary contains only orthographic information and that the meanings of words are stored in other brain areas. The third is investigating how orthographic, semantic, and perceptual information are integrated, and how the two brain hemispheres cooperate in linking pictures of objects to verbal labels.

The project not only increases the understanding of normal visual word recognition and of how the brain learns to represent objects and associate semantic meaning, but also provides a framework that can be used to study topics such as reading acquisition, second-language learning, and disordered reading. In addition, the techniques developed for this line of research can be translated to study the neural representation of different writing systems, specifically logographic scripts such as Chinese, and the integration of perceptual and semantic information in sign language. The research project is also leveraged extensively to train the next generation of scientists at the high school, undergraduate, and graduate levels. The project includes a summer outreach program emphasizing the involvement of underrepresented groups through a partnership with Howard University in Washington, DC. Finally, the project offers high school students with an interest in cognitive neuroscience the opportunity to gain hands-on research experience through a partnership with Thomas Jefferson High School for Science and Technology in Alexandria, VA.

Project Report

Learning to assign meaning to sensory stimuli lies at the foundation of human cognition. Reading written words is an excellent domain through which to study the neural bases of this cognitive process. For example, when learning the meaning of a written word such as "apple", brain plasticity mechanisms must first build a neural representation for the word "apple" and then link this representation to semantic concept information learned previously, from seeing or tasting an actual apple. The goal of our project was to use behavioral training studies in combination with advanced brain imaging to critically advance our understanding of how the brain accomplishes this feat. The project produced several key results.

First, the project resolved a debate in the literature regarding the specificity of the so-called "visual word form area" (VWFA), a key brain area associated with reading. While our (and others') earlier results had argued that the VWFA is a part of visual cortex with neurons specialized for the processing of the written word, others had posited that the VWFA was not specific to reading but fulfilled more general perceptual functions. We demonstrated that the VWFA indeed showed word selectivity when identified in each participant individually, but that variability in the location and size of the VWFA across individuals caused this selectivity to be washed out when the VWFA was defined at the group level or based on coordinates from the literature, as was done in the studies that had argued against word selectivity in the VWFA. This variability of the VWFA across participants fits well with the idea that learning to read engages brain plasticity in high-level visual cortex and that, owing to the diversity of people's pre-existing visual representations, the word-selective brain area (the VWFA) can emerge in slightly different locations in each individual.

To further test the idea that the VWFA contains an "orthographic lexicon" acquired through experience, in which different words are represented by different neurons and each neuron is selective for just one word, we conducted a learning study in which participants were each trained to learn 150 nonsense words (such as "soat"). We predicted that learning novel words should selectively increase neural specificity for these words in the VWFA. Indeed, while before training there was little selectivity for the trained words in the VWFA, following training, neuronal representations for the trained words (but not for untrained words) were as selective as those for real words. Interestingly, this change in selectivity was specific to the VWFA, i.e., it was not found in any other brain area. Thus, word learning appears to selectively increase neuronal specificity for the new words in the VWFA, thereby adding these words to the brain's visual dictionary.

In a final step, we investigated how the brain learns and stores the meanings associated with words. In particular, based on our work in visual object recognition, we posited that learning the meaning of novel words would involve plasticity at two levels: i) adding the words to the brain's visual dictionary in the VWFA, and ii) linking up the representations of these words with their meaning. We trained a group of participants to associate conceptual labels ("monkey", "wrench", "elephant", etc.) with particular sets of novel words. We found that, as in our previous learning study, these new words were selectively represented in the VWFA following training. As predicted, however, the VWFA did not appear to store the meaning of the new words. Rather, regions farther toward the front of the brain, in anterior temporal cortex (ATC), were shown to encode word meaning. Intriguingly, while these representations in ATC did not appear to generalize from orthographic stimuli to images, a region in frontal cortex showed concept selectivity for both written words and images referring to the same concept (e.g., "donkey").

The project not only increased our understanding of normal visual word recognition and of how the brain learns to represent objects and associate semantic meaning, but also provided a framework that can be used to study topics such as reading acquisition, second-language learning, and disordered reading. In fact, in a follow-up, NIH-funded study, we have already leveraged the project's results and techniques to gain insight into the neural bases of reading impairment in dyslexia. Our finding of learning-induced concept selectivity in ATC is also of interest for clinical studies, in particular of semantic dementia, which have consistently implicated ATC in the storage of concept information. On a more general level, the techniques developed for this project can be translated to study the neural representation of different writing systems, specifically logographic scripts such as Chinese, and the integration of perceptual and semantic information in sign language. The research project has also provided numerous training opportunities for the next generation of scientists, at the high school, undergraduate, and postdoctoral levels, including for several individuals from groups underrepresented in the sciences.

Agency
National Science Foundation (NSF)
Institute
Division of Behavioral and Cognitive Sciences (BCS)
Application #
1026934
Program Officer
Alumit Ishai
Budget Start
2010-09-15
Budget End
2014-08-31
Fiscal Year
2010
Total Cost
$650,666
Name
Georgetown University
City
Washington
State
DC
Country
United States
Zip Code
20057