Language comprehension is central to our experience of the world. However, the ease with which we understand the language we hear belies the real difficulty of the task. In reading, the eyes can only obtain detailed information from a span of a few characters at once. In auditory comprehension, percepts change moment by moment, and listeners must distribute cognitive resources between attending to the current speech stream and maintaining short-term memory of its earlier parts. Despite these challenges, comprehenders have a remarkable capacity to integrate a wide variety of information sources moment by moment as context in the sentence comprehension process. Our project focuses on one of the most important aspects of this capacity: the integration of bottom-up sensory input with top-down linguistic and extra-linguistic knowledge to guide inferences about sentence form and meaning. Existing theories have implicitly assumed a partitioning of interactivity that distinguishes the word as a fundamental level of linguistic information processing: word recognition is an evidential process whose output is nonetheless a specific "winner" that "takes all", which is in turn the input to an evidential sentence-comprehension process. It is theoretically possible that this partition is real and is an optimal solution to the problem of language comprehension under gross architectural constraints that favor modularity. On the other hand, it is also possible that this partition has been a theoretical convenience but that, in fact, evidence at the sub-word level plays an important role in sentence processing, and that sentence-level information can in turn affect the recognition of not only words further downstream but also words that have already been encountered.
In this project we use computational simulation and behavioral experimentation to explore the implications of removing this partition from theories of probabilistic human sentence comprehension. Our preliminary work demonstrates that, rather than vitiating their explanatory power, allowing interactivity between sentence-level and word-level processing may expand the scope of such theories, accounting for a number of outstanding problems for the notion of sentence comprehension as rational inference. The new work proposed for this project focuses primarily on applying this idea of rational sentence comprehension under uncertain input to the problem of eye-movement control during reading. The study of eye movements in reading is an ideal setting for further developing such a theory: it is well known that moment-by-moment inferences rapidly feed back to eye movements, but accounts of why particular couplings between sentence-comprehension phenomena and eye-movement patterns are observed remain poorly developed. In this project we develop a new model of rational eye-movement control in sentence comprehension based on rich models of probabilistic linguistic knowledge, gradient representations of perceptual certainty, oculomotor constraints, and principles of optimal decision making. We propose a number of behavioral studies designed to test the foundational principles underlying this model, primarily using eye-tracking but also using other paradigms where appropriate. Finally, we propose specific methods to test the ability of the model to predict realistic eye movements in the reading of various types of texts.
At the highest level, our work promises to lead us to a new level of refinement in our understanding of how the two fundamental processes of word recognition and grammatical analysis, commonly understood as independent of one another, are in fact deeply intertwined, how they jointly recruit the two key information sources of sensory input and linguistic knowledge, and how they guide not only moment-by-moment understanding but even detailed patterns of eye movements in reading. This work lays the foundation for deeper understanding and improved treatment of both language disorders and age-related changes in reading and spoken language comprehension, which can arise as a consequence of processing breakdowns involving either or both of these two key information sources.
This project investigates the means by which humans achieve real-time linguistic understanding from the spoken and written word. Our work investigates how the two fundamental processes of word recognition and grammatical analysis, commonly understood as independent of one another, are in fact deeply intertwined, how they jointly recruit the two key information sources of sensory input and linguistic knowledge, and how they guide not only moment-by-moment understanding but even detailed patterns of eye movements in reading. This work lays the foundation for deeper understanding and improved treatment of both language disorders and age-related changes in reading and spoken language comprehension, which can arise as a consequence of processing breakdowns involving either or both of these two key information sources.