Context changes the way we interpret sights and sounds. A shade of color halfway between yellow and green looks more yellow when applied to a picture of a banana, but more green when applied to a picture of a lime. An acoustic pattern halfway between "p" and "b" is interpreted as "p" following "sto-" but as "b" following "sta-". But does context actually alter the perception of sights and sounds, or only their interpretation? Cognitive scientists have long debated when and how "bottom-up" input signals (such as speech) are integrated with "top-down" information (context, or knowledge in memory). Do early perceptual processes protect a "correct," context-independent record of signals, or do perceptual processes immediately mix bottom-up and top-down information? One view is that accurate perception requires early separation of bottom-up and top-down information and late integration. An alternative is that early mixing of bottom-up and top-down information would make systems more efficient, by allowing context to guide processing immediately. In studies of language comprehension, this timing question is unsettled because of conflicting evidence from two measures of moment-to-moment processing. Studies that track people's eye movements to objects as they follow spoken instructions support immediate integration: helpful information appears to be used as soon as it is available. Studies using ERPs (event-related potentials, which measure cortical activity via scalp electrodes) suggest delayed integration: early brain responses appear to be affected only by bottom-up information. Results from the two measures have been difficult to compare because they have relied on very different experimental designs. In the proposed research, the investigator will study the timing of top-down integration in human sentence processing using experimental designs that allow simultaneous comparisons of eye tracking and ERPs, with the goal of determining when and how top-down context is integrated with bottom-up signal information.
The proposed work has important implications for the design of language technology. In contrast to computer systems, humans efficiently exploit top-down context and quickly learn to adapt to new contexts. An obstacle to making computer systems as adaptable as humans is that we do not fully understand how humans balance bottom-up signals and top-down context. The proposed research also has implications for understanding and treating language impairments. For example, understanding how normal perceivers balance and integrate signal and context may help identify subtle bottom-up impairments that lead to unusual reliance on context. The investigator is committed to integrating research and training activities in this CAREER project, and will actively involve undergraduate and graduate students in the research. The investigator will also develop courses designed to prepare students for independent research by providing hands-on training in cognitive theories and time-course methodologies.
In this project, we gained new insights into how we integrate the words we see or hear with memory, knowledge, and context. Some models of language understanding propose that, for computational efficiency, the brain should use a "bottom-up" processing approach to language ("syntax first", before meaning or context). On this approach, language is first processed in isolation from non-linguistic information, knowledge, or context. This allows fast, simple rules of thumb for syntactic parsing to generate the correct structure of a sentence most of the time; when they fail, more intensive "reanalysis" is needed. In contrast, "constraint-based" approaches propose that maximal efficiency would come from integrating bottom-up information (each word as you hear or read it) with top-down information (knowledge, memory, visual context) from the very beginning of processing.

In the scientific literature, there is strong support for both views. Data from electroencephalography (EEG) studies support the "syntax-first" view: early neural responses to syntactic errors seem to occur even when the listener or reader knows an error is very likely. Data from eye-tracking studies support the "constraint-based" view: when, for example, visual context is provided that can help guide syntactic parsing, listeners and readers appear to make use of that context as early as we can detect (within milliseconds). The dilemma is that the types of context and expectations used in the two experimental approaches (EEG vs. eye tracking) have been extremely different. A primary goal of this project was to develop methods that allow more direct, and even simultaneous, comparison of the two types of data.

One line of research in this project began with attempts to replicate key EEG findings supporting "syntax first". In one experiment, we gave subjects a weak or strong expectation for syntactic errors by making the errors infrequent or frequent (that is, relatively unexpected or expected). A previous study had found an early EEG response that was evoked by errors no matter how frequent they were, supporting the syntax-first idea that expectations cannot influence early steps of syntactic processing. We did not replicate this result. Instead, we found that the early EEG response reversed when errors were frequent: the response that is supposed to index syntactic errors was evoked by grammatical sentences when errors were expected. This strongly supports the constraint-based view: rather than blindly checking syntax without considering context, the language system is finely tuned to context, and the EEG responses indicate that the brain is constantly attempting to predict what information is coming next.

In another line of research, we used eye tracking to better understand when and how bottom-up (external) information is integrated with top-down (internal, or contextual) information. Many recent eye-movement studies have suggested that the human language processing system makes "optimal" use of predictive context. For example, on hearing "the boy will eat…", people appear to look at edible objects and not at inedible objects. We examined such cases more closely.
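To make the logic of the EEG comparison above concrete, here is a minimal, purely illustrative sketch (not the project's actual analysis pipeline): it averages simulated single-trial waveforms by grammaticality and by block type (errors infrequent vs. frequent), so the reported reversal shows up as the error-minus-grammatical difference changing sign between blocks. All names, numbers, and data below are hypothetical.

```python
import numpy as np

# Hypothetical single-trial EEG data: trials x time samples, in arbitrary units.
# Conditions cross grammaticality (error vs. grammatical) with block type
# (errors infrequent vs. frequent), mirroring the design described above.
rng = np.random.default_rng(0)
times = np.arange(-100, 600, 2)          # ms relative to word onset (assumed sampling)
n_trials = 80
early = (times >= 150) & (times <= 250)  # an assumed "early response" window

def simulate(effect):
    """Simulate trials with a controllable deflection in the early window."""
    trials = rng.normal(0.0, 1.0, size=(n_trials, times.size))
    trials[:, early] += effect           # hypothetical effect size
    return trials

conditions = {
    ("infrequent", "error"):       simulate(-1.5),  # classic early response to errors
    ("infrequent", "grammatical"): simulate(0.0),
    ("frequent",   "error"):       simulate(0.0),   # reversal: the effect appears for
    ("frequent",   "grammatical"): simulate(-1.5),  # grammatical sentences instead
}

# Condition-averaged waveforms and the error-minus-grammatical difference per block.
for block in ("infrequent", "frequent"):
    diff = (conditions[(block, "error")].mean(axis=0)
            - conditions[(block, "grammatical")].mean(axis=0))
    print(f"errors {block}: mean 150-250 ms difference = {diff[early].mean():+.2f}")
```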
Returning to the "eat" example, we found that predictive context does not wipe out the impact of later bottom-up input: on hearing "the boy will eat the white…", people are most likely to look at a picture of white cake, but they are more likely to look at a white car or a brown cake than at items related to neither the context (eat) nor the specified perceptual feature (white). In another study, we found that even when a verb's "agent" (the actor) has already been specified (e.g., "Toby" in "Toby will arrest…"), people are equally likely to make anticipatory eye movements to pictures of good patients (the entity the action will be done to, such as "crook") and good agents ("policeman"). This conflicts with earlier studies suggesting optimal use of context: since an agent, Toby, has already been specified, it is not optimal to fixate another agent. When we used passive constructions ("Toby will be arrested by the…"), which give more time and more syntactic cues following the first noun, we still found initially equal looks to good agents and patients, but a strong advantage for the good agent (appropriate in this context) emerged before the onset of that noun in the utterance.

These results may seem to have little to do with practical problems or everyday language use, but they reveal the relative balance of bottom-up and top-down information, and they inform our understanding of healthy adult language processing as well as language development and changes related to aging or language disorders.

Another aspect of this project was the use of our time-course methods for "translational" research into childhood language impairments. This work is helping to guide the search for genetic bases of language impairments and is leading to new interventions for language-related learning disabilities.
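The anticipatory-looking findings above rest on binning fixations into time windows relative to a spoken word and computing the proportion of looks to each picture type. The sketch below illustrates that computation on invented data; the object labels follow the agent/patient example, and none of this is the project's actual code.

```python
# Hypothetical fixation samples: (time in ms relative to verb onset, picture fixated).
samples = [
    (-200, "unrelated"), (-100, "good_agent"), (0, "good_patient"),
    (100, "good_patient"), (200, "good_agent"), (300, "good_patient"),
    (400, "good_patient"), (500, "good_agent"), (600, "good_patient"),
]

picture_types = ("good_agent", "good_patient", "unrelated")
bin_edges = list(range(-200, 801, 200))   # 200 ms analysis bins (an assumption)

# Proportion of fixation samples on each picture type within each time bin.
for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    in_bin = [pic for t, pic in samples if lo <= t < hi]
    if not in_bin:
        continue
    props = {pic: in_bin.count(pic) / len(in_bin) for pic in picture_types}
    print(f"{lo} to {hi} ms: " + ", ".join(f"{p}={v:.2f}" for p, v in props.items()))
```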