Everyday conversation typically involves both seeing and hearing our communication partners, and it affords a vital means by which older adults stay engaged with their families and communities. Our previous research has contributed fundamental knowledge of how aging affects both lipreading ability and spoken language comprehension. In this grant cycle, we seek to explain why some individuals recognize much more of a talker's speech when it is presented audiovisually than would be predicted from their vision-only and auditory-only performance, and, building on this knowledge, we aim to develop interventions that enhance older adults' spoken language comprehension. The proposed studies are motivated by a model of audiovisual (AV) integration that takes into account both signal and lexical factors not incorporated into most existing models. Importantly, we will use an equation describing the summation of partial auditory and visual information to unconfound audiovisual integration from unimodal word recognition abilities.

To accomplish our first Specific Aim, we will manipulate the properties of auditory and visual speech signals to localize the sources of age-related differences in everyday spoken language comprehension. We have developed matrix-style word recognition tests that will allow us to determine how age differences in processing speed and working memory affect AV integration at the level of phoneme and viseme activation. For our second Specific Aim, we have developed two new open-set word tests, one that systematically varies the number of activated word candidates and another that systematically varies word frequency. Using these tests, we will manipulate lexical properties affecting the perceptual confusability of different words and determine how cognitive factors relate to the activation levels of competing word candidates.
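For illustration, one standard formulation of such summation (the classic probability-summation benchmark, not necessarily the exact equation used in the proposed work) predicts audiovisual performance from independent combination of the two unimodal channels:

```latex
% Predicted audiovisual recognition under independent
% combination of auditory-only and vision-only information:
P_{AV} = P_{A} + P_{V} - P_{A} \, P_{V}
```

Here, \(P_{A}\) and \(P_{V}\) denote auditory-only and vision-only recognition probabilities. Observed audiovisual scores that exceed this prediction indicate integration benefit beyond what the unimodal abilities alone would yield, which is how such an equation can separate integration skill from unimodal word recognition.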
For our final Specific Aim, we will use a test of audiovisual spoken language comprehension developed in our laboratory to identify variables that can predict age and individual differences in the benefits of slower speech. Our goal is to determine how slowed speech can best enhance older adults' comprehension of extended spoken passages and to predict who will benefit most from such interventions. Accomplishing these aims will have both theoretical and clinical significance, allowing us to assess a theoretical model of audiovisual speech processing and to test methods for enhancing spoken language comprehension in older adults. Overall, this project represents a unique opportunity that brings together an aural rehabilitation specialist, cognitive psychologists, and a clinical research audiologist to develop aural rehabilitation procedures for older adults that are grounded in a well-tested theoretical model and in an understanding of how cognitive measures relate to audiovisual speech comprehension.

Public Health Relevance

Speech recognition improves markedly when individuals can both see and hear a talker, compared with hearing alone. For older adults, the improvement often equals or exceeds that obtained from hearing aids and other listening devices. We propose an innovative theoretical model of audiovisual integration and will use it as a framework for establishing how aging affects audiovisual speech recognition, as well as to determine how best to enhance older adults' spoken language comprehension.

National Institutes of Health (NIH)
National Institute on Aging (NIA)
Research Project (R01)
Study Section
Language and Communication Study Section (LCOM)
Program Officer
St Hillaire-Clarke, Coryse
Washington University
Schools of Medicine
Saint Louis
United States
Spehar, Brent; Tye-Murray, Nancy; Myerson, Joel et al. (2016) Real-Time Captioning for Improving Informed Consent: Patient and Physician Benefits. Reg Anesth Pain Med 41:65-8
Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel et al. (2016) Lipreading and audiovisual speech recognition across the adult lifespan: Implications for audiovisual integration. Psychol Aging 31:380-9
Myerson, Joel; Spehar, Brent; Tye-Murray, Nancy et al. (2016) Cross-modal Informational Masking of Lipreading by Babble. Atten Percept Psychophys 78:346-54
Sommers, Mitchell S; Phelps, Damian (2016) Listening Effort in Younger and Older Adults: A Comparison of Auditory-Only and Auditory-Visual Presentations. Ear Hear 37 Suppl 1:62S-8S
Dey, Avanti; Sommers, Mitchell S (2015) Age-related differences in inhibitory control predict audiovisual speech perception. Psychol Aging 30:634-46
Peelle, Jonathan E; Sommers, Mitchell S (2015) Prediction and constraint in audiovisual speech perception. Cortex 68:169-81
Spehar, Brent; Goebel, Stacey; Tye-Murray, Nancy (2015) Effects of Context Type on Lipreading and Listening Performance and Implications for Sentence Processing. J Speech Lang Hear Res 58:1093-102
Tye-Murray, Nancy; Spehar, Brent P; Myerson, Joel et al. (2015) The self-advantage in visual speech processing enhances audiovisual speech recognition in noise. Psychon Bull Rev 22:1048-53
Tye-Murray, Nancy; Hale, Sandra; Spehar, Brent et al. (2014) Lipreading in school-age children: the roles of age, hearing status, and cognitive ability. J Speech Lang Hear Res 57:556-65
Tye-Murray, Nancy; Spehar, Brent P; Myerson, Joel et al. (2013) Reading your own lips: common-coding theory and visual speech perception. Psychon Bull Rev 20:115-9

Showing the most recent 10 out of 20 publications