Everyday conversation typically involves both seeing and hearing our communication partners, and it affords a vital means by which older adults stay engaged with their families and communities. Our previous research has contributed fundamental knowledge of how aging affects both lipreading ability and spoken language comprehension. In this grant cycle, we seek to explain why some individuals recognize much more of a talker's speech when it is presented audiovisually than their vision-only and auditory-only performance would predict, and to use this knowledge to develop interventions that enhance older adults' spoken language comprehension. The proposed studies are motivated by a model of audiovisual (AV) integration that takes into account signal and lexical factors not incorporated into most existing models. Importantly, we will use an equation describing the summation of partial auditory and visual information to unconfound AV integration from unimodal word recognition abilities.

To accomplish our first Specific Aim, we will manipulate the properties of auditory and visual speech signals to localize the sources of age-related differences in everyday spoken language comprehension. We have developed matrix-style word recognition tests that will allow us to determine how age differences in processing speed and working memory affect AV integration at the level of phoneme and viseme activation. For our second Specific Aim, we have developed two new open-set word tests: one that systematically varies the number of activated word candidates and another that systematically varies word frequency. Using these tests, we will manipulate lexical properties that affect the perceptual confusability of words and determine how cognitive factors relate to the activation levels of competing word candidates. For our final Specific Aim, we will use a test of audiovisual spoken language comprehension developed in our laboratory to identify variables that predict age and individual differences in the benefit obtained from slowed speech. Our goal is to determine how slowed speech can best enhance older adults' comprehension of extended spoken passages and to predict who will benefit most from such interventions.

Accomplishing these aims will have both theoretical and clinical significance, allowing us to assess a theoretical model of audiovisual speech processing and to test methods for enhancing spoken language comprehension in older adults. Overall, this project represents a unique opportunity, bringing together an aural rehabilitation specialist, cognitive psychologists, and a clinical research audiologist to develop aural rehabilitation procedures for older adults that are grounded in a well-tested theoretical model and in an understanding of how cognitive measures relate to audiovisual speech comprehension.
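For illustration only: the abstract does not specify the exact summation equation, but one commonly used formulation in the audiovisual speech literature predicts AV performance from the unimodal scores under an assumption that auditory and visual information contribute independently, and treats the difference between observed and predicted AV scores as an index of integration. A sketch of that formulation, with p_A and p_V denoting auditory-only and visual-only proportion-correct scores, is:

% Hypothetical sketch, assuming independent auditory and visual contributions;
% not necessarily the specific equation used in this project.
\begin{align}
  \hat{p}_{AV} &= p_{A} + p_{V}\,(1 - p_{A}) \\
  \text{Integration benefit} &= p^{\mathrm{obs}}_{AV} - \hat{p}_{AV}
\end{align}

Under this kind of formulation, a listener whose observed AV score exceeds the independence prediction shows integration benefit beyond what unimodal word recognition alone would explain, which is how such an equation can unconfound integration ability from unimodal performance.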
Speech recognition improves markedly when individuals can both see and hear a talker, compared with hearing alone. For older adults, the improvement often equals or exceeds that obtained from hearing aids and other listening devices. We propose an innovative theoretical model of audiovisual integration and will use it as a framework both for establishing how aging affects audiovisual speech recognition and for determining how best to enhance older adults' spoken language comprehension.