The long-term goal of the proposed research is to develop a comprehensive model of the role of visual information in speech perception. Speechreading, a form of human information processing, requires an observer to obtain linguistic information from the movements of a talker's face. Virtually all people, and especially people with hearing loss, rely on visual information in difficult listening situations. The major objectives of this research are: 1) to study visual attention to specific regions of face motion during visual speech perception for individuals who differ in speechreading proficiency; and 2) to determine whether attention to specific regions of face motion changes when speechreading improves. Project 1, Performance on perceptual tasks in visual speech processing, will specify parameters of face motion for visual speech stimuli and study how speechreaders attend to, select, and use face motion for visual speech perception. Image processing techniques, such as optic flow analysis, will be used to quantify face motion and to test hypotheses about visual phonetic cues for speech perception. Eye-monitoring techniques will be used to determine precisely which facial regions the speechreader attends to. Comparisons will be made among adults who differ in speechreading proficiency. Subjects include those with normal hearing and those with hearing loss of congenital, early, or adult onset. Project 2, Changes in perceptual capabilities for improved speechreading accuracy, will determine 1) whether stimulus, feedback, and task variability during practice improve retention and transfer, and 2) whether subjects who improve change their patterns of attention and selection for facial regions containing visual phonetic cues. Experimental and control subjects with similar hearing and rehabilitation histories will be compared.
Scientific knowledge about visual speech perception will contribute to the development of sensory aids, automatic speech recognition systems, and research-based intervention protocols to augment speech perception.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
First Independent Research Support & Transition (FIRST) Awards (R29)
Project #
5R29DC002250-05
Application #
2856599
Study Section
Sensory Disorders and Language Study Section (CMS)
Project Start
1995-01-01
Project End
2000-12-31
Budget Start
1999-01-01
Budget End
2000-12-31
Support Year
5
Fiscal Year
1999
Total Cost
Indirect Cost
Name
University of Illinois Urbana-Champaign
Department
Other Health Professions
Type
Other Domestic Higher Education
DUNS #
041544081
City
Champaign
State
IL
Country
United States
Zip Code
61820
Lansing, C R; Helgeson, C L (1995) Priming the visual recognition of spoken words. J Speech Hear Res 38:1377-1386