Experiments will be performed using modern signal processing techniques to identify and measure the perceptually important characteristics of video signals for speechreading (lipreading) by both normal-hearing and hearing-impaired persons. The effects of spatial filtering and of common video degradations on speechreading performance will be investigated, and an index analogous to the Articulation Index will be developed for predicting speechreading performance from physical measurements. Methods for synthesizing video speech signals will be developed and used in perceptual studies designed to identify the physical characteristics of the signal that are important in speechreading. Synthesized video speech signals will also be used to investigate supplementary visual cues for improved communication by speechreading. The data obtained in the component projects will be integrated to develop a quantitative model of speechreading at the segmental level.
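For context, the conventional Articulation Index on which the proposed measure is modeled is a weighted sum of per-band audibility; a speechreading analog would presumably replace acoustic frequency bands and audibility with visually defined signal components (for example, spatial-frequency channels of the video image) and measures of their integrity. The following is only a sketch of that general form, not a formula taken from the project:

\[
\mathrm{AI} \;=\; \sum_{i=1}^{n} I_i\, A_i, \qquad \sum_{i=1}^{n} I_i = 1, \qquad 0 \le A_i \le 1,
\]

where \(I_i\) is the importance weight assigned to band \(i\) and \(A_i\) is the audibility (or, in the visual analog, the preserved fidelity) of that band, yielding an index between 0 and 1 that predicts intelligibility from physical measurements of the transmitted signal.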
Preminger, J E; Lin, H B; Payen, M et al. (1998) Selective visual masking in speechreading. J Speech Lang Hear Res 41:564-75
Dempsey, J J; Levitt, H; Josephson, J et al. (1992) Computer-assisted tracking simulation (CATS). J Acoust Soc Am 92:701-10