Experiments will be performed using modern signal processing techniques to identify and measure the perceptually important characteristics of video signals for speechreading (lipreading) by both normal-hearing and hearing-impaired persons. The effects of spatial filtering, as well as those of common video degradations, on speechreading performance will be investigated, and an index analogous to the Articulation Index will be developed for predicting speechreading performance from physical measurements. Methods for synthesizing video speech signals will be developed and used in perceptual studies designed to identify the physical characteristics of the signal that are important in speechreading. The development of supplementary visual cues for improved communication by speechreading will be investigated using synthesized video speech signals. The data obtained in the component projects will be integrated to develop a quantitative model of speechreading at the segmental level.
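
To illustrate the kind of spatial filtering degradation described above, the sketch below low-pass filters a single grayscale video frame with a Gaussian kernel at several strengths. This is a minimal illustration only; the synthetic frame, the sigma values, and the use of scipy.ndimage are assumptions made for the example and do not represent the project's actual stimulus-processing chain.

```python
# Hypothetical sketch: spatial low-pass filtering of one video frame, the kind
# of controlled degradation whose effect on speechreading could be measured.
# The frame contents, sigma values, and scipy-based pipeline are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def lowpass_frame(frame: np.ndarray, sigma: float) -> np.ndarray:
    """Blur a grayscale frame; larger sigma removes more fine spatial detail."""
    return gaussian_filter(frame.astype(float), sigma=sigma)

# Example: a synthetic 480x640 frame filtered at several cutoffs.
rng = np.random.default_rng(0)
frame = rng.random((480, 640))
for sigma in (1.0, 2.0, 4.0):
    blurred = lowpass_frame(frame, sigma)
    # Variance of the removed component indicates how much detail was lost.
    print(f"sigma={sigma}: removed high-frequency energy "
          f"{np.var(frame - blurred):.4f}")
```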

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC000507-07
Application #: 2125745
Study Section: Hearing Research Study Section (HAR)
Project Start: 1988-07-01
Project End: 1996-06-30
Budget Start: 1994-07-01
Budget End: 1996-06-30
Support Year: 7
Fiscal Year: 1994
Total Cost:
Indirect Cost:
Name: CUNY Graduate School and University Center
Department: Other Health Professions
Type: Other Domestic Higher Education
DUNS #: 620128194
City: New York
State: NY
Country: United States
Zip Code: 10016
Preminger, J E; Lin, H B; Payen, M et al. (1998) Selective visual masking in speechreading. J Speech Lang Hear Res 41:564-75
Dempsey, J J; Levitt, H; Josephson, J et al. (1992) Computer-assisted tracking simulation (CATS). J Acoust Soc Am 92:701-10