Current progress in the programming and customization of assistive listening devices is limited by an inadequate understanding of the context-dependent weighting of acoustic cues, the interference among these cues under adverse listening conditions, and the processing limitations imposed on these cues by aging and hearing impairment. This proposal targets this gap by identifying how older normal-hearing and hearing-impaired listeners use auditory cues during aided presentations to understand speech in noise. The primary objectives of this project are to identify (1) the auditory cues that maximize speech understanding under adverse listening conditions, (2) how older listeners perceptually weight those cues, and (3) which auditory properties of the masking speech most limit speech understanding. The central aim is to identify the most informative auditory cues for older listeners when audibility is restored but only partial speech information is available in noisy environments. The central hypothesis is that in quiet, temporal envelope cues contribute most to speech intelligibility and remain available to older listeners with aided hearing. In noise and competing speech, however, the contribution of these cues is more limited, and other cues become more important, such as the temporal fine structure, the processing of which may be limited by age or cochlear pathology.
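As a concrete illustration of the two cue types named in the hypothesis, the sketch below shows the standard Hilbert-transform decomposition of one speech band into its temporal envelope and temporal fine structure (TFS). The band edges, filter order, and tone carrier are illustrative assumptions, not the project's actual processing.

```python
# Minimal sketch: separate temporal envelope and fine structure cues for one
# frequency band of a speech signal using the Hilbert transform.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_and_tfs(speech, fs, band=(300.0, 600.0)):
    """Split one frequency band of speech into envelope and fine structure."""
    # Band-pass filter to isolate a single analysis band (edges in Hz).
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    band_signal = sosfiltfilt(sos, speech)

    # Analytic signal: its magnitude is the slowly varying temporal envelope;
    # the cosine of its phase is the rapidly varying fine structure.
    analytic = hilbert(band_signal)
    envelope = np.abs(analytic)
    tfs = np.cos(np.angle(analytic))
    return envelope, tfs

# Envelope-only ("vocoded-like") speech for this band: keep the envelope but
# replace the original fine structure with a tone carrier at the band center.
fs = 16000
speech = np.random.randn(fs)  # stand-in for one second of recorded speech
env, _ = envelope_and_tfs(speech, fs)
t = np.arange(len(speech)) / fs
envelope_only = env * np.cos(2 * np.pi * 450.0 * t)
```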
Specific Aim #1 addresses speech in continuous, interrupted, and speech-babble noise. Testing different listener groups enables investigation of the contributions of age, cochlear pathology, and amplification to auditory perceptual weights. Correlational analysis will explore the relationship among perceptual weights, speech-in-noise performance, and cognitive abilities.
Specific Aim #2 investigates the temporal properties of the target and competing speech and how they interact. These experiments explore how temporal properties of the target talker facilitate speech understanding and how those of the competing talker interfere with it. Informational masking is also explored using time-reversed competition. The long-term goal of this project is to define acoustic parameters for enhanced programming of assistive hearing technology and to identify individual weighting strategies for the future customization of these devices, capitalizing on the existing capabilities of both the device and the listener. The significant contribution of this project is identifying the speech cues that will be most informative for these listeners across noisy conditions. The approach is innovative: it uses novel signal processing strategies to independently vary complex temporal properties of speech via noisy signal extraction, and it extends the 'glimpsing' theory of automatic speech recognition to human understanding of speech from partial information. These innovations allow direct investigation of auditory temporal cue use in a competing-talker paradigm, arguably the most difficult listening condition for older listeners. A minimal sketch of the glimpsing idea follows.
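To make the glimpsing extension concrete, the sketch below computes a glimpse proportion in the spirit of glimpse analyses (e.g., Cooke, 2006): the fraction of spectro-temporal cells in which the target talker exceeds the competing talker by a local SNR criterion. The 3 dB criterion, STFT settings, and noise stand-ins are illustrative assumptions, not the project's analysis parameters.

```python
# Minimal sketch: glimpse proportion for a target talker mixed with a
# competing talker, computed over short-time Fourier transform cells.
import numpy as np
from scipy.signal import stft

def glimpse_proportion(target, masker, fs, criterion_db=3.0):
    """Fraction of time-frequency cells where the target dominates the masker."""
    _, _, T = stft(target, fs=fs, nperseg=512)
    _, _, M = stft(masker, fs=fs, nperseg=512)
    # Local SNR in dB for each spectro-temporal cell (epsilons avoid log(0)).
    local_snr_db = 20 * np.log10(np.abs(T) / (np.abs(M) + 1e-12) + 1e-12)
    glimpses = local_snr_db > criterion_db
    return glimpses.mean()

fs = 16000
target = np.random.randn(2 * fs)  # stand-in for the target talker
masker = np.random.randn(2 * fs)  # stand-in for the competing talker
print(f"Glimpse proportion: {glimpse_proportion(target, masker, fs):.2f}")
```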

Public Health Relevance

The proposed research is relevant to public health because it identifies how older listeners use specific acoustic properties of speech and defines the extent to which those properties remain available to support speech understanding in competing-speech contexts. This step is essential for designing more cost-effective hearing devices that improve the speech understanding of older listeners in adverse listening conditions. Thus, the proposed research is relevant to the NIH mission to develop fundamental knowledge that will reduce the burdens of human disability.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Small Research Grants (R03)
Project #
1R03DC012506-01
Application #
8333541
Study Section
Special Emphasis Panel (ZDC1-SRB-Y (32))
Program Officer
Donahue, Amy
Project Start
2012-07-01
Project End
2015-06-30
Budget Start
2012-07-01
Budget End
2013-06-30
Support Year
1
Fiscal Year
2012
Total Cost
$106,050
Indirect Cost
$31,050
Name
University of South Carolina at Columbia
Department
Other Health Professions
Type
Schools of Public Health
DUNS #
041387846
City
Columbia
State
SC
Country
United States
Zip Code
29208
Gibbs 2nd, Bobby E; Fogerty, Daniel (2018) Explaining intelligibility in speech-modulated maskers using acoustic glimpse analysis. J Acoust Soc Am 143:EL449
Sharpe, Victoria; Fogerty, Daniel; den Ouden, Dirk-Bart (2017) The Role of Fundamental Frequency and Temporal Envelope in Processing Sentences with Temporary Syntactic Ambiguities. Lang Speech 60:399-426
Smith, Kimberly G; Fogerty, Daniel (2017) Speech recognition error patterns for steady-state noise and interrupted speech. J Acoust Soc Am 142:EL306
Fogerty, Daniel; Ahlstrom, Jayne B; Bologna, William J et al. (2016) Glimpsing Speech in the Presence of Nonsimultaneous Amplitude Modulations From a Competing Talker: Effect of Modulation Rate, Age, and Hearing Loss. J Speech Lang Hear Res 59:1198-1207
Fogerty, Daniel; Xu, Jiaqian; Gibbs 2nd, Bobby E (2016) Modulation masking and glimpsing of natural and vocoded speech during single-talker modulated noise: Effect of the modulation spectrum. J Acoust Soc Am 140:1800
Fogerty, Daniel; Xu, Jiaqian (2016) Speech recognition interference by the temporal and spectral properties of a single competing talker. J Acoust Soc Am 140:EL197
Smith, Kimberly G; Fogerty, Daniel (2016) Integration of partial information for spoken and written sentence recognition by older listeners. J Acoust Soc Am 139:EL240
Fogerty, Daniel (2015) Indexical properties influence time-varying amplitude and fundamental frequency contributions of vowels to sentence intelligibility. J Phon 52:89-104
Fogerty, Daniel; Entwistle, Jenine L (2015) Level considerations for chimeric processing: Temporal envelope and fine structure contributions to speech intelligibility. J Acoust Soc Am 138:EL459-64
Smith, Kimberly G; Fogerty, Daniel (2015) Integration of Partial Information Within and Across Modalities: Contributions to Spoken and Written Sentence Recognition. J Speech Lang Hear Res 58:1805-17

Showing the most recent 10 out of 14 publications