This is the first-year funding of a five-year continuing award. The goal of this project is to improve the reading achievement of children with reading problems by designing computer-based interactive reading tutors that incorporate new speech and language technologies. The reading tutors will help English- and Spanish-speaking children learn to read by providing classroom teachers and reading specialists with tools to instruct and exercise the set of auditory, visual and linguistic skills needed to read: speech discrimination, speech production, phonological awareness, sound-to-letter mappings, vocabulary, fluency and comprehension. The tutors will be designed, tested and refined in collaboration with reading specialists and instructional designers, and tested with children in special education programs in elementary schools in Boulder, Colorado.

The tutors will incorporate new and improved auditory and visual speech recognition and facial animation technologies. Five partner sites will develop the speech and language technologies: Oregon Graduate Institute (OGI), Universidad de las Americas, Puebla (UDLA), University of California, Santa Cruz (UCSC), University of California, San Diego (UCSD), and the University of Colorado (CU). Research and development of children's speech recognizers will be conducted at UDLA for Spanish and at OGI for English; these sites will also design and develop speech corpora to enable recognition research. UCSD will conduct research leading to head-tracking and speech-reading systems, and will design and develop video corpora to enable this research. UCSC will conduct research leading to new animated faces with improved animation capabilities. System integration will be conducted at OGI, which will integrate the auditory and visual recognition systems and the facial animation systems into the CSLU Toolkit. CU will develop English reading tutors in collaboration with teachers, instructional designers and students, and will conduct evaluations of project outcomes. UDLA will also develop and test Spanish versions of the tutors.

The project is expected to produce significant advances in auditory and visual recognition technologies, including accurate recognition of children's speech, accurate recognition of visual features of speech, and the first real-time integration of auditory and visual speech recognition in language training applications. In addition, the PI and his team will achieve a new level of understanding of the structure of children's speech and of the processing of auditory and visual information in reading. Facial animation is expected to play a major role in engaging children, enabling them to enjoy the learning experience more and therefore spend more time on task. The PI expects to demonstrate that facial animation with visible articulators will improve speech discrimination and speech production skills, phonological awareness and reading. By integrating auditory and visual speech recognition and speech generation technologies into animated agents, and by designing reading tutors that incorporate these agents within a well-designed reading program, the PI hopes to improve reading achievement in schools. To optimize this outcome, the PI is working closely with reading specialists to incorporate their experience and best practices, and is developing formative and summative evaluation plans that ensure fair and accurate assessment of the outcomes of the planned interventions.
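The abstract does not describe how the auditory and visual recognition streams will be combined. As a purely illustrative sketch, not the project's method, the fragment below shows one common way such integration can work: a weighted log-linear fusion of per-frame phoneme posteriors produced by separate audio and visual recognizers. The function name, the fusion weight and the toy inputs are assumptions for illustration only.

    # Illustrative sketch only (not the project's actual integration method):
    # combine per-frame phoneme posteriors from separate audio and visual recognizers.
    import numpy as np

    def fuse_posteriors(audio_post, visual_post, audio_weight=0.7):
        """Weighted log-linear fusion of two posterior streams.

        audio_post, visual_post: arrays of shape (frames, phonemes), rows sum to 1.
        audio_weight: assumed relative reliability of the audio stream.
        """
        log_fused = (audio_weight * np.log(audio_post + 1e-12)
                     + (1.0 - audio_weight) * np.log(visual_post + 1e-12))
        fused = np.exp(log_fused)
        return fused / fused.sum(axis=1, keepdims=True)  # renormalize each frame

    # Toy example: 2 frames, 3 candidate phonemes.
    audio = np.array([[0.7, 0.2, 0.1], [0.4, 0.5, 0.1]])
    visual = np.array([[0.5, 0.3, 0.2], [0.2, 0.6, 0.2]])
    print(fuse_posteriors(audio, visual).argmax(axis=1))  # most likely phoneme per frame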

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0086107
Program Officer: Ephraim P. Glinert
Budget Start: 2000-09-01
Budget End: 2007-10-31
Fiscal Year: 2000
Total Cost: $4,145,500
Name: University of Colorado at Boulder
City: Boulder
State: CO
Country: United States
Zip Code: 80309