The benefit of combined electric and acoustic stimulation (EAS) for speech and pitch perception has been demonstrated in a number of previous studies. In some cases, EAS benefit has been documented even when cochlear-implant (CI) patients have very limited residual hearing and speech perception ability in the non-implanted ear. To date, it remains unclear how individual differences in sensory inputs, linguistic context, and cognitive functions influence the degree of benefit provided by EAS, and it is not known whether the typical EAS patient utilizes their residual hearing to its greatest potential. These uncertainties limit clinicians' and patients' ability to make well-informed decisions about second-ear implantation. In this research, we seek to identify factors that underlie EAS benefit and to investigate methods that could enhance the benefits of residual hearing in EAS users. Unlike the descriptive approach employed by most previous studies, we will take a more comprehensive, model-based approach that considers both the bottom-up and top-down processes that contribute to multi-source speech perception in EAS users.
Aim 1 will determine how EAS benefit is influenced by listeners' ability to utilize and optimally weight speech cues presented to the CI ear and the ear with residual hearing.
Aim 2 will investigate how bottom-up low-frequency acoustic cues and top-down processing (such as the use of linguistic context and the ability to fill in missing speech information) interact to improve speech intelligibility in EAS users. Finally, Aim 3 will develop and test speech-enhancement algorithms that are likely to improve speech perception by EAS users. Overall, this research should add substantially to our understanding of 1) the degree of benefit that can be expected from low-frequency residual hearing in EAS, 2) the mechanisms responsible for EAS benefit and the factors that account for its variability across individuals, and 3) the nature of signal-processing algorithms that may enhance speech perception in EAS users.

Public Health Relevance

The purpose of this study is to identify factors that underlie the benefit of combined electric and acoustic stimulation in cochlear-implant users and to investigate methods that could enhance speech recognition performance in these users. Findings should lead to improved speech perception in unilateral cochlear-implant users with residual low-frequency hearing, and allow individual patients and their clinicians to make better-informed decisions regarding second-ear implantation.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC012300-02
Application #: 8605184
Study Section: Auditory System Study Section (AUD)
Program Officer: Donahue, Amy
Project Start: 2013-01-15
Project End: 2017-12-31
Budget Start: 2014-01-01
Budget End: 2014-12-31
Support Year: 2
Fiscal Year: 2014
Total Cost: $291,826
Indirect Cost: $59,940
Name: Northeastern University
Department: Other Health Professions
Type: Schools of Allied Health Professions
DUNS #: 001423631
City: Boston
State: MA
Country: United States
Zip Code: 02115
Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee et al. (2018) Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults With Cochlear Implants. Ear Hear 39:101-109
Kong, Ying-Yee; Jesse, Alexandra (2017) Low-frequency fine-structure cues allow for the online use of lexical stress during spoken-word recognition in spectrally degraded speech. J Acoust Soc Am 141:373
Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee (2017) English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition. J Speech Lang Hear Res 60:190-198
Kong, Ying-Yee; Winn, Matthew B; Poellmann, Katja et al. (2016) Discriminability and Perceptual Saliency of Temporal and Spectral Cues for Final Fricative Consonant Voicing in Simulated Cochlear-Implant and Bimodal Hearing. Trends Hear 20:
Oh, Soo Hee; Donaldson, Gail S; Kong, Ying-Yee (2016) The role of continuous low-frequency harmonicity cues for interrupted speech perception in bimodal hearing. J Acoust Soc Am 139:1747
Oh, Soo Hee; Donaldson, Gail S; Kong, Ying-Yee (2016) Top-Down Processes in Simulated Electric-Acoustic Hearing: The Effect of Linguistic Context on Bimodal Benefit for Temporally Interrupted Speech. Ear Hear 37:582-92
Kong, Ying-Yee; Donaldson, Gail; Somarowthu, Ala (2015) Effects of contextual cues on speech recognition in simulated electric-acoustic stimulation. J Acoust Soc Am 137:2846-57
Kong, Ying-Yee; Somarowthu, Ala; Ding, Nai (2015) Effects of Spectral Degradation on Attentional Modulation of Cortical Auditory Responses to Continuous Speech. J Assoc Res Otolaryngol 16:783-96
Donaldson, Gail S; Rogers, Catherine L; Johnson, Lindsay B et al. (2015) Vowel identification by cochlear implant users: Contributions of duration cues and dynamic spectral cues. J Acoust Soc Am 138:65-73
Huang, Huang; Lee, Tan; Kleijn, W Bastiaan et al. (2015) A Method of Speech Periodicity Enhancement Using Transform-domain Signal Decomposition. Speech Commun 67:102-112

Showing the most recent 10 out of 13 publications