Language comprehension is central to our experience of the world. However, the ease with which we understand the language we hear belies the real difficulty of the task. In reading, the eyes can obtain detailed information from a span of only a few characters at once. In auditory comprehension, percepts change moment by moment, and listeners must divide cognitive resources between attending to the current speech stream and maintaining short-term memory of earlier parts of the speech stream. Despite these challenges, comprehenders have a remarkable capacity to integrate a wide variety of information sources, moment by moment, as context in the sentence comprehension process. Our project focuses on one of the most important aspects of this capacity: the integration of bottom-up sensory input with top-down linguistic and extra-linguistic knowledge to guide inferences about sentence form and meaning. Existing theories have implicitly assumed a partitioning of interactivity that distinguishes the word as a fundamental level of linguistic information processing: word recognition is an evidential process whose output is nonetheless a specific "winner" that "takes all," and which in turn serves as the input to an evidential sentence-comprehension process. It is theoretically possible that this partition is real and constitutes an optimal solution to the problem of language comprehension under gross architectural constraints that favor modularity. It is also possible, however, that this partition has been merely a theoretical convenience, and that in fact evidence at the sub-word level plays an important role in sentence processing, while sentence-level information can in turn affect the recognition not only of words further downstream but also of words that have already been encountered. In this project we use computational simulation and behavioral experimentation to explore the implications of removing this partition from theories of probabilistic human sentence comprehension.
Our preliminary work demonstrates that, rather than vitiating their explanatory power, allowing interactivity between sentence-level and word-level processing may expand the scope of such theories, accounting for a number of outstanding problems for the notion of sentence comprehension as rational inference. The new work proposed for this project focuses primarily on extending this idea of rational sentence comprehension under uncertain input to the problem of eye-movement control during reading. The study of eye movements in reading is an ideal setting for further developing such a theory: it is well known that moment-by-moment inferences rapidly feed back to eye movements, but accounts of why particular couplings between sentence-comprehension phenomena and eye-movement patterns are observed remain poorly developed. In this project we develop a new model of rational eye-movement control in sentence comprehension based on rich models of probabilistic linguistic knowledge, gradient representations of perceptual certainty, oculomotor constraints, and principles of optimal decision making. We propose a number of behavioral studies designed to test the foundational principles underlying this model, primarily using eye-tracking but also using other paradigms where appropriate. Finally, we propose specific methods to test the ability of the model to predict realistic eye movements in the reading of various types of texts. At the highest level, our work promises to bring a new level of refinement to our understanding of how the two fundamental processes of word recognition and grammatical analysis, commonly understood as independent of one another, are in fact deeply intertwined; how they jointly recruit the two key information sources of sensory input and linguistic knowledge; and how they guide not only moment-by-moment understanding but even detailed patterns of eye movements in reading.
This work lays the foundation for deeper understanding and improved treatment of both language disorders and age-related changes in reading and spoken language comprehension, which can arise as a consequence of processing breakdowns involving either or both of these two key information sources.

Public Health Relevance

This project investigates the means by which humans achieve real-time linguistic understanding from the spoken and written word. Our work investigates how the two fundamental processes of word recognition and grammatical analysis, commonly understood as independent of one another, are in fact deeply intertwined; how they jointly recruit the two key information sources of sensory input and linguistic knowledge; and how they guide not only moment-by-moment understanding but even detailed patterns of eye movements in reading. This work lays the foundation for deeper understanding and improved treatment of both language disorders and age-related changes in reading and spoken language comprehension, which can arise as a consequence of processing breakdowns involving either or both of these two key information sources.

Agency
National Institutes of Health (NIH)
Institute
Eunice Kennedy Shriver National Institute of Child Health & Human Development (NICHD)
Type
Research Project (R01)
Project #
5R01HD065829-02
Application #
8206498
Study Section
Language and Communication Study Section (LCOM)
Program Officer
Miller, Brett
Project Start
2010-12-15
Project End
2015-11-30
Budget Start
2011-12-01
Budget End
2012-11-30
Support Year
2
Fiscal Year
2012
Total Cost
$183,101
Indirect Cost
$55,601
Name
University of California San Diego
Department
Other Health Professions
Type
Schools of Arts and Sciences
DUNS #
804355790
City
La Jolla
State
CA
Country
United States
Zip Code
92093
Morgan, Emily; Levy, Roger (2016) Abstract knowledge versus direct experience in processing of binomial expressions. Cognition 157:384-402
Schotter, Elizabeth R; Jia, Annie (2016) Semantic and plausibility preview benefit effects in English: Evidence from eye movements. J Exp Psychol Learn Mem Cogn 42:1839-1866
Myslín, Mark; Levy, Roger (2016) Comprehension priming as rational expectation for repetition: Evidence from syntactic processing. Cognition 147:29-56
Schotter, Elizabeth R; Leinenger, Mallorie (2016) Reversed preview benefit effects: Forced fixations emphasize the importance of parafoveal vision for efficient reading. J Exp Psychol Hum Percept Perform 42:2039-2067
Rayner, Keith; Schotter, Elizabeth R; Masson, Michael E J et al. (2016) So Much to Read, So Little Time: How Do We Read, and Can Speed Reading Help? Psychol Sci Public Interest 17:4-34
Schotter, Elizabeth R; Lee, Michelle; Reiderman, Michael et al. (2015) The effect of contextual constraint on parafoveal processing in reading. J Mem Lang 83:118-139
Abbott, Matthew J; Angele, Bernhard; Ahn, Y Danbi et al. (2015) Skipping syntactically illegal the previews: The role of predictability. J Exp Psychol Learn Mem Cogn 41:1703-14
Ma, Guojie; Li, Xingshan; Rayner, Keith (2015) Readers extract character frequency information from nonfixated-target word at long pretarget fixations during Chinese reading. J Exp Psychol Hum Percept Perform 41:1409-19
Schotter, Elizabeth R; Bicknell, Klinton; Howard, Ian et al. (2014) Task effects reveal cognitive flexibility responding to frequency and predictability: evidence from eye movements in reading and proofreading. Cognition 131:1-27
Leinenger, Mallorie (2014) Phonological coding during reading. Psychol Bull 140:1534-55

Showing the most recent 10 out of 27 publications