The big question the PIs are addressing in this project is how to unobtrusively track the silent reading of novice readers so that an intelligent tutoring system can aid reading comprehension. This EAGER project focuses on the first steps in answering that question. The pilot builds on previous work in vision and speech technology, sensor fusion, machine learning, user modeling, intelligent tutors, and eye movements in an effort to assess the feasibility of using eye tracking techniques, along with other information collected by an intelligent reading tutor, to predict the reading difficulties of novice/young readers. In particular, the work plan includes collecting gaze data under real-world conditions in the context of using the existing Reading Tutor; designing software to display those gaze traces so that their accuracy can be gauged; testing gaze-point accuracy and detecting gaze-speech discrepancies; and using those data to develop heuristics for detecting tracking errors in real time and for calibrating eye tracking in noisy school environments, a primary setting where the augmented Reading Tutor would ultimately be used. The intellectual merit of this project lies in identifying and addressing the challenges of relating children's gaze data to their silent reading, in making technical contributions to calibrating eye trackers so that they can be used in normal everyday applications, and in setting the stage for intelligent tutors across diverse domains to exploit gaze more broadly.

The project's most important potential broader impact is in establishing a foundation for exploiting gaze input to build intelligent computing systems that can help children with reading difficulties learn to read and read to learn. If successful, the PIs will develop a larger project that will extend the successful Project LISTEN Reading Tutor so that it can track readers as they read silently and help them with their comprehension -- both comprehension of the text itself and strategies for coming to deep understanding.

Project Report

The educational focus of this work is arguably the most important cross-cutting skill in modern society: reading. Reading is a high-bandwidth source of information to the reader, yet the act of reading typically provides little or no information to an automated tutor about what the reader did, or did not, learn from the text. Current methods for assessing comprehension of a given text are time-consuming and obtrusive. Unobtrusive methods for monitoring reading comprehension in real time would be invaluable for intelligent tutors and many other applications. This project addressed that goal by using speech and gaze input to analyze oral and silent reading. It focused on children in grades 2-4, at the crucial transition from learning to read to reading to learn, but the results will likely also apply to older children and adults.

Intellectual Merit: This project builds on a unique school-deployed platform developed with prior support from the National Science Foundation and the Department of Education. Project LISTEN's automated Reading Tutor listens to children read aloud and helps them learn to read. It offers a compelling combination of advantages for the proposed work. First, children already use the Reading Tutor regularly at school. This usage made it possible to collect gaze data during authentic educational activities in ecologically valid settings, not just artificial tasks in short lab experiments. Second, the Reading Tutor already logs detailed, timestamped, multi-resolution, longitudinal data to its database, including children's speech and its own actions. The gaze measurements were indexed to these data by millisecond-level timestamps. Third, the Reading Tutor already embeds within-subject randomized trials in its tutorial interactions. This capability enabled controlled experiments to test the effects of different text features on children's gaze and speech patterns. This work built on previous work in vision and speech technology, sensor fusion, machine learning, user modeling, intelligent tutors, cognitive psychology, and decades of research on eye movements in reading. For example, eye-voice span -- the distance from the word being read aloud to the word the reader is looking at -- has been shown to be a sensitive indicator of reading proficiency, comprehension of the text, and word-to-word fluctuation in cognitive load (see the sketch below). Analysis drew on existing tools to mine data logged by the Reading Tutor. The project used statistical and machine learning methods to train models on noisy speech and gaze data to monitor engagement, decoding, fluency, vocabulary, and comprehension. Expected contributions of this work include advances in relating gaze data to children's oral and silent reading, thereby establishing a foundation for exploiting gaze input to help children learn to read and read to learn, as well as technical contributions such as methods to maintain eye tracker calibration by exploiting normal application input.
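To make the eye-voice span measure concrete, the following is a minimal sketch, not the project's actual code, of how it could be computed from the kind of timestamped data the Reading Tutor logs: a speech-aligned sequence recording which word was being read aloud at each moment, and gaze samples mapped to word positions in the text. All function names, data structures, and the toy values are illustrative assumptions.

```python
# Illustrative sketch: computing eye-voice span from timestamped alignments.
# Data structures and names are assumptions, not Project LISTEN's actual schema.

def eye_voice_span(voice_events, gaze_samples):
    """For each gaze sample, return (timestamp_ms, span_in_words), where span is
    the index of the fixated word minus the index of the word being spoken."""
    spans = []
    vi = 0  # index into voice_events, each (start_ms, end_ms, word_index)
    for t_ms, gaze_word_index in gaze_samples:
        # Advance to the voice event whose interval contains this gaze timestamp.
        while vi + 1 < len(voice_events) and voice_events[vi][1] <= t_ms:
            vi += 1
        start_ms, end_ms, spoken_word_index = voice_events[vi]
        if start_ms <= t_ms < end_ms and gaze_word_index is not None:
            spans.append((t_ms, gaze_word_index - spoken_word_index))
    return spans


if __name__ == "__main__":
    # Toy example: the reader's eyes run one to two words ahead of the voice.
    voice = [(0, 400, 0), (400, 800, 1), (800, 1200, 2)]
    gaze = [(100, 1), (500, 3), (900, 3)]
    print(eye_voice_span(voice, gaze))  # [(100, 1), (500, 2), (900, 1)]
```

A positive span indicates the eyes running ahead of the voice, as in fluent reading; spans near zero or negative values suggest the reader is struggling with the current word.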
This "Rumblestrip" feature developed during this project proved to be a useful tool for determining when children's gaze drifted from the screen and reminded the student to sit still or return their attention to the computer screen. This feature has a potential to be used in all areas of education or any field which makes use of computers.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1322174
Program Officer: Janet L. Kolodner
Budget Start: 2013-03-01
Budget End: 2013-12-31
Fiscal Year: 2013
Total Cost: $300,000
Name: Carnegie-Mellon University
City: Pittsburgh
State: PA
Country: United States
Zip Code: 15213