A thorough understanding of language must involve examination of how language is used in its most common setting: between interlocutors. During live conversations, participants usually have access to visual as well as auditory speech and speaker information. While much research has examined how the auditory speech signal supports live conversation, relatively little research has addressed how visual speech (lipread) information aids conversation. This gap is unfortunate given what is now known about the ubiquity, automaticity, and neurophysiological primacy of visual speech perception. The proposed research has been designed to broaden our understanding of how visual speech is used in conversational settings. One way in which shared understanding emerges from conversation is through the cross-speaker language alignment known to exist at all levels of dialogue. Interlocutors align to (partially imitate) each other's speech so as to converge toward a common tempo and intonation, as well as more microscopic dimensions such as voice onset time and vowel spectra. The proposed research will examine how visual speech information influences speech alignment. This research will also examine two important theoretical issues in the contemporary speech literature, namely: 1) the role of talker information in phonetic perception; and 2) the role of external perceptual information in controlling speech production. The experiments will test whether visual speech information influences speech production alignment in the context of both a simple word shadowing task and a two-participant interactive task. Manipulations will be incorporated to determine whether the talker information that influences production responses is available cross-modally. Another set of experiments will test whether visual speech information can modulate speech production so as to facilitate speech production response times. The results of this research should have important implications for theories of speech and speaker perception, as well as for our understanding of multimodal integration, imitation, and the relationship between perception and action. The experiments should also add to our knowledge of the information salient for lipreading, face recognition, and voice identification. The research will address issues relevant to individuals with hearing impairments, as well as to aphasic, prosopagnosic, and phonagnosic patients.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC008957-02
Application #: 7460599
Study Section: Language and Communication Study Section (LCOM)
Program Officer: Shekim, Lana O
Project Start: 2007-07-01
Project End: 2010-06-30
Budget Start: 2008-07-01
Budget End: 2009-06-30
Support Year: 2
Fiscal Year: 2008
Total Cost: $248,566
Indirect Cost:
Name: University of California Riverside
Department: Psychology
Type: Schools of Arts and Sciences
DUNS #: 627797426
City: Riverside
State: CA
Country: United States
Zip Code: 92521
Dias, James W; Rosenblum, Lawrence D (2016) Visibility of speech articulation enhances auditory phonetic convergence. Atten Percept Psychophys 78:317-333
Dias, James W; Cook, Theresa C; Rosenblum, Lawrence D (2016) Influences of selective adaptation on perception of audiovisual speech. J Phon 56:75-84
Miller, Rachel M; Sanchez, Kauyumari; Rosenblum, Lawrence D (2013) Is speech alignment to talkers or tasks? Atten Percept Psychophys 75:1817-1826
Sanchez, Kauyumari; Dias, James W; Rosenblum, Lawrence D (2013) Experience with a talker can transfer across modalities to facilitate lipreading. Atten Percept Psychophys 75:1359-1365
Sanchez, Kauyumari; Miller, Rachel M; Rosenblum, Lawrence D (2010) Visual influences on alignment to voice onset time. J Speech Lang Hear Res 53:262-272
Miller, Rachel M; Sanchez, Kauyumari; Rosenblum, Lawrence D (2010) Alignment to visual speech information. Atten Percept Psychophys 72:1614-1625