Audiovisual (AV) speech stimuli are typically far more intelligible than auditory-only (AO) stimuli under noisy conditions or when the listener has a hearing loss. However, individual differences in the ability to glean speech information from seeing a talker (i.e., lipreading, or visual-only (VO) speech perception) inevitably limit the contribution of the visual stimulus to AV intelligibility. During adult aging, difficulty perceiving acoustic speech in noise is common, and individual differences in lipreading ability limit the benefit obtained from AV speech, even though AV integration ability is typically preserved in healthy aging. Research on training to improve auditory speech perception in noise has produced modest outcomes to date. This project is based on the rationale that lipreading training can improve visual speech perception and, in so doing, also improve AV speech recognition. Theoretically, what needs to be improved is both VO phoneme category perception and VO word recognition, the latter of which involves integration of phonemic information. Both may be difficult to achieve in the context of the adult perceiver's more dominant, expert auditory speech processing system. Taking this into account, along with emerging knowledge and theory about visual perceptual learning (PL) and our results from previous lipreading training studies, a series of experiments on visual speech PL is proposed. Training experiments will manipulate factors designed (1) to reduce reliance on auditory phonological representations initiated through print feedback, and (2) to promote phoneme category learning and lexical-level integration of phoneme information. Participants will be normal-hearing (NH) younger adults and older adults who do not qualify for hearing aids but have difficulty with speech perception in noise, with or without elevated pure-tone thresholds.
Aim 1 will investigate learning with nonsense-word stimuli in a match-to-sample training task designed to reveal how different types of printed information affect learning.
Aim 2 will investigate a novel incidental learning paradigm for phoneme category learning and compare its results with those of conventional phoneme identification training.
Aim 3 will investigate the feedback contingency levels associated with three different feedback methods during real-word training. The feedback contingency manipulations are designed to reveal how subjective confidence and knowledge of performance accuracy affect PL. Every participant will undergo screening and pre- and post-training tests that include VO words and sentences, as well as AV and AO words and sentences in speech-shaped noise, in order to evaluate the extent to which VO training generalizes to AV speech perception in noise. Age and individual differences will be investigated using statistical modeling approaches.
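The abstract does not specify the statistical models to be used. As a minimal, hypothetical sketch of how training gains and age differences might be analyzed, the following fits a linear mixed-effects model (Python, statsmodels) with test session (pre vs. post) and age group as fixed effects and a random intercept per participant; the file name, column names, and model structure are illustrative assumptions, not the project's actual analysis plan.

```python
# Hypothetical sketch (not the authors' analysis): mixed-effects model of
# pre/post-training intelligibility scores, asking whether VO training
# gains in the AV-in-noise condition differ by age group.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format file with columns: participant, age_group
# (younger/older), session (pre/post), condition (VO/AO/AV), score
df = pd.read_csv("scores.csv")

# Restrict to AV speech in noise to test generalization of VO training.
av = df[df["condition"] == "AV"]

# Fixed effects: session, age group, and their interaction;
# random intercept for each participant.
model = smf.mixedlm("score ~ session * age_group", data=av,
                    groups=av["participant"])
result = model.fit()
print(result.summary())  # session:age_group term indexes differential gain
```

Individual differences (e.g., baseline lipreading ability) could enter such a model as additional continuous covariates, but the abstract leaves those choices open.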

Public Health Relevance

The ability to perceive visual speech during noisy face-to-face communication can be highly effective for overcoming hearing difficulties, but adults who have relied on auditory speech throughout their lives are typically unable to take full advantage of the information in visual speech stimuli. Building on recent advances in research on perceptual learning, this project aims to develop effective, practical training paradigms for visual speech perception in older and younger adults.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
High Priority, Short Term Project Award (R56)
Project #
1R56DC016107-01A1
Application #
9747398
Study Section
Language and Communication Study Section (LCOM)
Program Officer
Shekim, Lana O
Project Start
2018-09-01
Project End
2019-08-31
Budget Start
2018-09-01
Budget End
2019-08-31
Support Year
1
Fiscal Year
2018
Total Cost
Indirect Cost
Name
George Washington University
Department
Other Health Professions
Type
Schools of Arts and Sciences
DUNS #
043990498
City
Washington
State
DC
Country
United States
Zip Code
20052