The proposed research investigates the contribution of visible information in face-to-face communication and how it is combined with auditory information in bimodal speech perception. The experimental methodology uses a strong-inference strategy of hypothesis testing, independent manipulation of multiple sources of information, and the testing of mathematical models against the results of individual participants. Synthetic speech will allow the auditory and visual signals to be manipulated directly, an experimental feature central to the study of psychophysics and perception. In addition, expanded factorial designs are used to provide the most powerful test of quantitative models of perceptual recognition. These designs reveal how auditory speech and visual speech are processed alone and in combination, and under different degrees of ambiguity. Experiments are proposed to clarify the classic McGurk effect, to assess the contributions of segment frequency in the language and of the psychophysical properties of the auditory and visual speech, and to contrast the influence of visible speech with that of written text in terms of how each is integrated with auditory speech. Experiments are also proposed to test whether previous results and theoretical conclusions based on syllable perception extend to meaningful items, such as words and sentences. Further experiments will evaluate the integration of paralinguistic information in bimodal speech perception and the relative influence of dynamic and static sources of visible information in speechreading and bimodal speech perception. To further substantiate the model testing, Bayesian model selection as well as root-mean-square deviation (RMSD) goodness-of-fit criteria will be used in the evaluation of extant models. Many communication environments involve a noisy auditory channel, which degrades speech perception and recognition. Having speech available from the talker's face improves intelligibility in these situations.
Visible speech also supplements other (degraded) sources of information for persons with hearing loss. The use of visible speech and its combination with auditory speech is therefore critical for improving universal access to spoken language. It has the potential to 1) improve the quality of speech of persons with perception and production deficits, 2) enhance learning and communication, 3) provide remedial training for poor readers, and 4) facilitate human-machine interactions.
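The model-testing approach described in the abstract can be illustrated concretely. The minimal sketch below assumes the fuzzy logical model of perception (FLMP) named in the project's publication list: it computes FLMP predictions for a hypothetical 2x2 corner of an expanded factorial design and scores them against response proportions using the RMSD goodness-of-fit criterion. All parameter values and "observed" data here are invented for illustration; in the actual research, parameters would be fit to each individual participant's results.

```python
import math

def flmp_prediction(a, v):
    """FLMP integration: multiplicative combination of auditory (a) and
    visual (v) support for one response alternative, normalized over the
    two-alternative case (e.g., /da/ vs. /ba/)."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

def rmsd(observed, predicted):
    """Root-mean-square deviation between observed and predicted proportions."""
    return math.sqrt(
        sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)
    )

# Hypothetical support values at two ambiguity levels per modality
# (these would normally be free parameters estimated per participant).
aud = [0.9, 0.2]   # auditory support for /da/
vis = [0.8, 0.3]   # visual support for /da/

# Predictions for the bimodal cells of the factorial design.
predicted = [flmp_prediction(a, v) for a in aud for v in vis]
observed = [0.95, 0.60, 0.45, 0.10]  # illustrative response proportions

print(round(rmsd(observed, predicted), 3))
```

A lower RMSD indicates a better fit, though RMSD alone does not penalize model flexibility; that is why the project pairs it with Bayesian model selection, which trades goodness of fit against model complexity.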

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
5R01DC000236-20
Application #
6634422
Study Section
Special Emphasis Panel (ZRG1-BBBP-3 (01))
Program Officer
Shekim, Lana O
Project Start
1983-12-01
Project End
2005-02-28
Budget Start
2003-03-01
Budget End
2004-02-29
Support Year
20
Fiscal Year
2003
Total Cost
$203,826
Indirect Cost
Name
University of California Santa Cruz
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
125084723
City
Santa Cruz
State
CA
Country
United States
Zip Code
95064
Chen, Trevor H; Massaro, Dominic W (2008) Seeing pitch: visual information for lexical tones of Mandarin-Chinese. J Acoust Soc Am 123:2356-66
Massaro, Dominic W; Chen, Trevor H (2008) The motor theory of speech perception revisited. Psychon Bull Rev 15:453-7; discussion 458-62
Massaro, Dominic W; Bosseler, Alexis (2006) Read my lips: The importance of the face in a computer-animated tutor for vocabulary learning by children with autism. Autism 10:495-510
Massaro, Dominic W; Light, Joanna (2004) Using visible speech to train perception and production of speech for individuals with hearing loss. J Speech Lang Hear Res 47:304-20
Chen, Trevor H; Massaro, Dominic W (2004) Mandarin speech perception by ear and eye follows a universal principle. Percept Psychophys 66:820-36
Bosseler, Alexis; Massaro, Dominic W (2003) Development and evaluation of a computer-animated tutor for vocabulary and language learning in children with autism. J Autism Dev Disord 33:653-72
Srinivasan, Ravindra J; Massaro, Dominic W (2003) Perceiving prosody from the face and voice: distinguishing statements from echoic questions in English. Lang Speech 46:1-22
Massaro, D W; Cohen, M M; Campbell, C S et al. (2001) Bayes factor of model selection validates FLMP. Psychon Bull Rev 8:1-17
Massaro, D W; Cohen, M M (2000) Tests of auditory-visual integration efficiency within the framework of the fuzzy logical model of perception. J Acoust Soc Am 108:784-9
Massaro, D W; Cohen, M M (1999) Speech perception in perceivers with hearing loss: synergy of multiple modalities. J Speech Lang Hear Res 42:21-41

Showing the most recent 10 out of 17 publications