Everyday conversation typically involves both seeing and hearing our communication partners, and affords a vital means by which older adults stay engaged with their families and communities. Our previous research has contributed to fundamental knowledge of how aging affects both lipreading ability and spoken language comprehension. In this grant cycle, we will seek to explain why some individuals recognize much more of a talker's speech when it is presented audiovisually than would be predicted from their vision-only and auditory-only performance, and based on this knowledge, we aim to develop interventions to enhance older adults' spoken language comprehension. The proposed studies are motivated by a model of audiovisual (AV) integration that takes into account both signal and lexical factors that are not incorporated into most existing models. Importantly, we will use an equation describing the summation of partial auditory and visual information to unconfound AV integration and unimodal word recognition abilities. To accomplish our first Specific Aim, we will manipulate the properties of auditory and visual speech signals in order to localize the sources of age-related differences in everyday spoken language comprehension. We have developed matrix-style word recognition tests that will allow us to determine how age differences in processing speed and working memory affect AV integration at the level of phoneme and viseme activation. For our second Specific Aim, we have developed two new open-set word tests, one that systematically varies the number of activated word candidates and another that systematically varies word frequency. Using these tests, we will manipulate lexical properties affecting the perceptual confusability of different words and determine how cognitive factors relate to the activation levels of competing word candidates.
For our final Specific Aim, we will use a test of audiovisual spoken language comprehension developed in our laboratory to identify variables that can predict age and individual differences in the benefits of slower speech. Our goal is to determine how slowed speech can best enhance older adults' comprehension of extended spoken passages and to predict who will benefit most from such interventions. Accomplishment of these aims will have both theoretical and clinical significance, allowing us to assess a theoretical model of audiovisual speech processing and to test ways of enhancing spoken language comprehension in older adults. Overall, this project represents a unique opportunity that brings together an aural rehabilitation specialist, cognitive psychologists, and a clinical research audiologist for the purpose of developing aural rehabilitation procedures for older adults that are grounded in a well-tested theoretical model and in an understanding of how cognitive measures relate to audiovisual speech comprehension.
Speech recognition improves markedly when individuals can both see and hear a talker, compared with hearing alone. For older adults, the improvement often equals or exceeds that obtained from hearing aids and other listening devices. We propose an innovative theoretical model of audiovisual integration and will use it as a framework for establishing how aging affects audiovisual speech recognition, as well as for determining how best to enhance older adults' spoken language comprehension.
Myerson, Joel; Spehar, Brent; Tye-Murray, Nancy et al. (2016) Cross-modal informational masking of lipreading by babble. Atten Percept Psychophys 78:346-54
Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel et al. (2016) Lipreading and audiovisual speech recognition across the adult lifespan: Implications for audiovisual integration. Psychol Aging 31:380-9
Spehar, Brent; Goebel, Stacey; Tye-Murray, Nancy (2015) Effects of context type on lipreading and listening performance and implications for sentence processing. J Speech Lang Hear Res 58:1093-102
Peelle, Jonathan E; Sommers, Mitchell S (2015) Prediction and constraint in audiovisual speech perception. Cortex 68:169-81
Tye-Murray, Nancy; Spehar, Brent P; Myerson, Joel et al. (2015) The self-advantage in visual speech processing enhances audiovisual speech recognition in noise. Psychon Bull Rev 22:1048-53
Tye-Murray, Nancy; Spehar, Brent P; Myerson, Joel et al. (2013) Reading your own lips: common-coding theory and visual speech perception. Psychon Bull Rev 20:115-9
Feld, Julia; Sommers, Mitchell (2011) There goes the neighborhood: Lipreading and the structure of the mental lexicon. Speech Commun 53:220-228
Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel et al. (2011) Cross-modal enhancement of speech detection in young and older adults: does signal content matter? Ear Hear 32:650-5
Strand, Julia F; Sommers, Mitchell S (2011) Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition. J Acoust Soc Am 130:1663-72
Tye-Murray, Nancy; Sommers, Mitchell; Spehar, Brent et al. (2010) Aging, audiovisual integration, and the principle of inverse effectiveness. Ear Hear 31:636-44
Showing the most recent 10 out of 15 publications