This project's goal is to develop a synthetic talking face. Humans developed sophisticated abilities to perceive and integrate auditory and visual (AV) speech information long before they were required to read printed text presented by computers. Seeing as well as hearing speech reduces cognitive workload and improves comprehension over hearing the talker alone. Realizing the advantages of AV speech for human-computer interaction requires synthesizing visual speech, which provides an unlimited supply of visual speech images without pre-recorded data. The approach here is to drive optical speech synthesis with speech acoustics. Computational methods are used to obtain models of the transformation from acoustics to optics. The method capitalizes on the coarticulatory information of speech production captured by diphones to produce naturalistic visual speech images, and it is applied directly to natural acoustic speech features to obtain coordination between the acoustic and optical signals. The synthesized visual speech is based on a texture-mapped wireframe model. A natural speech corpus on which to base the synthesis is being obtained via simultaneously recorded 3-D optical, audio, and video data. Synthesis development is guided by human perceptual testing. The DVD-archived corpus will be disseminated.
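As a rough illustration of the acoustics-to-optics idea (not the project's actual model), the sketch below fits a linear mapping from hypothetical per-frame acoustic feature vectors to 3-D facial marker coordinates and uses it to drive a wireframe from new audio frames. The feature dimensions, marker count, linear form, and synthetic data are all assumptions made for illustration only.

```python
# Minimal sketch of an acoustics-to-optics mapping (illustrative assumptions only):
# acoustic features are per-frame vectors; optical features are the flattened
# 3-D coordinates of face markers for the same frame.
import numpy as np

rng = np.random.default_rng(0)

n_frames, d_acoustic, n_markers = 500, 13, 20
d_optical = 3 * n_markers

# Placeholder data standing in for time-aligned acoustic/optical recordings.
A_train = rng.normal(size=(n_frames, d_acoustic))          # acoustic features per frame
true_map = rng.normal(size=(d_acoustic + 1, d_optical))    # unknown "ground-truth" linear map
A_aug = np.hstack([A_train, np.ones((n_frames, 1))])       # append bias column
O_train = A_aug @ true_map + 0.01 * rng.normal(size=(n_frames, d_optical))

# Fit the acoustic-to-optical transformation by least squares.
W, *_ = np.linalg.lstsq(A_aug, O_train, rcond=None)

def synthesize_optical(acoustic_frames: np.ndarray) -> np.ndarray:
    """Map acoustic feature frames to predicted marker positions (frames x markers x 3)."""
    aug = np.hstack([acoustic_frames, np.ones((acoustic_frames.shape[0], 1))])
    return (aug @ W).reshape(-1, n_markers, 3)

# Example: predict marker trajectories for new audio frames to drive the wireframe.
new_audio_features = rng.normal(size=(10, d_acoustic))
markers = synthesize_optical(new_audio_features)
print(markers.shape)  # (10, 20, 3)
```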
The project will expand access to information and improve knowledge acquisition for diverse groups of individuals, for example: children still acquiring literacy skills; adults with inadequate literacy; individuals using a second language; and individuals with hearing losses who rely on audiovisual speech. Results will be disseminated broadly through professional outlets. Graduate and undergraduate students will participate in the research.