Language comprehension is central to our experience of the world, but the ease with which we understand the language we hear belies the real difficulty of the task. In reading, the eyes obtain detailed information from a span of only a few characters at once. In auditory comprehension, percepts change moment by moment, and listeners must divide cognitive resources between attending to the current speech stream and maintaining short-term memory of its earlier parts. Despite these challenges, comprehenders have a remarkable capacity to integrate a wide variety of information sources, moment by moment, as context in the sentence comprehension process.

Our project focuses on one of the most important aspects of this capacity: the integration of bottom-up sensory input with top-down linguistic and extralinguistic knowledge to guide inferences about sentence form and meaning. Existing theories have implicitly assumed a partitioning of interactivity that treats the word as a fundamental level of linguistic information processing: word recognition is an evidential process, but its output is a single word, a "winner" that "takes all," which in turn serves as input to an evidential sentence-comprehension process. It is theoretically possible that this partition is real and is an optimal solution to the problem of language comprehension under gross architectural constraints that favor modularity. It is also possible, however, that the partition has been merely a theoretical convenience and that, in fact, evidence at the sub-word level plays an important role in sentence processing, while sentence-level information can in turn affect the recognition not only of words further downstream but also of words that have already been encountered.

In this project we use computational simulation and behavioral experimentation to explore the implications of removing this partition from theories of probabilistic human sentence comprehension. Our preliminary work demonstrates that, rather than vitiating their explanatory power, allowing interactivity between sentence-level and word-level processing may expand the scope of such theories, accounting for a number of outstanding problems for the notion of sentence comprehension as rational inference.

The new work proposed here focuses primarily on applying this idea of rational sentence comprehension under uncertain input to the problem of eye-movement control during reading. The study of eye movements in reading is an ideal setting for further developing such a theory: it is well known that moment-by-moment inferences rapidly feed back into eye movements, yet accounts of why particular couplings between sentence-comprehension phenomena and eye-movement patterns are observed remain poorly developed. We therefore develop a new model of rational eye-movement control in sentence comprehension based on rich models of probabilistic linguistic knowledge, gradient representations of perceptual certainty, oculomotor constraints, and principles of optimal decision making. We propose a number of behavioral studies designed to test the foundational principles underlying this model, primarily using eye tracking but also using other paradigms where appropriate. Finally, we propose specific methods to test the model's ability to predict realistic eye movements in the reading of various types of texts.
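To make the core idea concrete, the integration of uncertain bottom-up input with top-down linguistic knowledge can be sketched as Bayesian inference over word identities. The following is a minimal toy illustration, not drawn from the proposal itself: the three-word lexicon, the contextual probabilities, and the uniform letter-confusion channel are all invented for exposition.

```python
import math

# Toy contextual prior P(word | context); words and probabilities
# are invented purely for illustration.
CONTEXT_PRIOR = {"boats": 0.6, "coats": 0.3, "bolts": 0.1}

def percept_likelihood(percept, word, p_correct=0.8):
    """P(percept | word) under a simple noisy-letter channel: each letter
    is read correctly with probability p_correct, otherwise confused
    uniformly with the other 25 letters; '?' marks an unresolved letter."""
    if len(percept) != len(word):
        return 0.0
    p = 1.0
    for seen, true in zip(percept, word):
        if seen == "?":                    # no information at this position
            p *= 1.0 / 26.0
        elif seen == true:
            p *= p_correct
        else:
            p *= (1.0 - p_correct) / 25.0
    return p

def posterior(percept):
    """P(word | percept, context) proportional to
    P(percept | word) * P(word | context): bottom-up letter evidence
    combined with top-down contextual knowledge."""
    scores = {w: percept_likelihood(percept, w) * prior
              for w, prior in CONTEXT_PRIOR.items()}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

# A degraded percept whose first letter was never resolved: the contextual
# prior, not the sensory input alone, decides between 'boats' and 'coats',
# and no single "winner" word need be committed to.
for word, prob in sorted(posterior("?oats").items(), key=lambda kv: -kv[1]):
    print(f"{word}: P = {prob:.3f}, surprisal = {-math.log2(prob):.2f} bits")
```

On this view, the posterior's surprisal (negative log probability) is a natural currency linking comprehension to behavior, in the spirit of the logarithmic relation between word predictability and reading time reported in Smith & Levy (2013), listed among the publications below.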
At the highest level, our work promises a new level of refinement in our understanding of how the two fundamental processes of word recognition and grammatical analysis, commonly understood as independent of one another, are in fact deeply intertwined; how they jointly recruit the two key information sources of sensory input and linguistic knowledge; and how they guide not only moment-by-moment understanding but even detailed patterns of eye movements in reading. This work lays the foundation for a deeper understanding and improved treatment of both language disorders and age-related changes in reading and spoken language comprehension, which can arise as a consequence of processing breakdowns involving either or both of these two key information sources.

Public Health Relevance

This project investigates the means by which humans achieve real-time linguistic understanding from the spoken and written word. Our work examines how the two fundamental processes of word recognition and grammatical analysis, commonly understood as independent of one another, are in fact deeply intertwined; how they jointly recruit the two key information sources of sensory input and linguistic knowledge; and how they guide not only moment-by-moment understanding but even detailed patterns of eye movements in reading. This work lays the foundation for a deeper understanding and improved treatment of both language disorders and age-related changes in reading and spoken language comprehension, which can arise as a consequence of processing breakdowns involving either or both of these two key information sources.

Agency: National Institutes of Health (NIH)
Institute: Eunice Kennedy Shriver National Institute of Child Health & Human Development (NICHD)
Type: Research Project (R01)
Project #: 5R01HD065829-02
Application #: 8206498
Study Section: Language and Communication Study Section (LCOM)
Program Officer: Miller, Brett
Project Start: 2010-12-15
Project End: 2015-11-30
Budget Start: 2011-12-01
Budget End: 2012-11-30
Support Year: 2
Fiscal Year: 2012
Total Cost: $183,101
Indirect Cost: $55,601
Name: University of California San Diego
Department: Other Health Professions
Type: Schools of Arts and Sciences
DUNS #: 804355790
City: La Jolla
State: CA
Country: United States
Zip Code: 92093
Schotter, Elizabeth R; Bicknell, Klinton; Howard, Ian et al. (2014) Task effects reveal cognitive flexibility responding to frequency and predictability: evidence from eye movements in reading and proofreading. Cognition 131:1-27
Rayner, Keith; Schotter, Elizabeth R (2014) Semantic preview benefit in reading English: The effect of initial letter capitalization. J Exp Psychol Hum Percept Perform 40:1617-28
Ma, Guojie; Li, Xingshan; Rayner, Keith (2014) Word segmentation of overlapping ambiguous strings during Chinese reading. J Exp Psychol Hum Percept Perform 40:1046-59
Leinenger, Mallorie (2014) Phonological coding during reading. Psychol Bull 140:1534-55
Li, Xingshan; Bicknell, Klinton; Liu, Pingping et al. (2014) Reading is fundamentally similar across disparate writing systems: a systematic characterization of how words and characters influence eye movements in Chinese reading. J Exp Psychol Gen 143:895-913
Schotter, Elizabeth R; Jia, Annie; Ferreira, Victor S et al. (2014) Preview benefit in speaking occurs regardless of preview timing. Psychon Bull Rev 21:755-62
Plummer, Patrick; Perea, Manuel; Rayner, Keith (2014) The influence of contextual diversity on eye movements in reading. J Exp Psychol Learn Mem Cogn 40:275-83
Schotter, Elizabeth R (2013) Synonyms Provide Semantic Preview Benefit in English. J Mem Lang 69:
Leinenger, Mallorie; Rayner, Keith (2013) Eye Movements while Reading Biased Homographs: Effects of Prior Encounter and Biasing Context on Reducing the Subordinate Bias Effect. J Cogn Psychol (Hove) 25:665-681
Smith, Nathaniel J; Levy, Roger (2013) The effect of word predictability on reading time is logarithmic. Cognition 128:302-19

Showing the most recent 10 out of 13 publications