Many deaf or severely hearing-impaired individuals can understand speech in quiet environments using a cochlear implant (CI), which stimulates the auditory nerve directly with electrical current. However, their speech understanding typically declines substantially in even small amounts of background noise. For those who retain some residual low-frequency hearing, the combination of electric and acoustic stimulation (EAS) can significantly improve speech understanding in background noise. The fundamental frequency (F0) variation and the low-frequency amplitude envelope of the target talker are important cues underlying EAS benefit. The broad long-term goals of the proposed research are to advance the understanding of how low-frequency acoustic stimulation combines with electric stimulation to enhance speech understanding in difficult listening situations, and to enhance EAS benefit for individuals who might otherwise receive limited or no benefit. A further long-term goal is to develop a wearable, real-time processor that can deliver low-frequency speech cues to CI users more effectively.
The specific aims are to (1) increase the amount of EAS benefit for CI patients who already show a benefit; (2) provide EAS benefit to CI patients who typically show little or no benefit; and (3) understand why some CI patients do not benefit from EAS even when their audiometric results suggest they should. This work has the potential to extend the benefits of EAS to CI users whose residual hearing is typically insufficient for an EAS benefit, and to enhance the EAS benefit for those who already show one.
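
The abstract above names two low-frequency acoustic cues, F0 variation and the amplitude envelope of the target talker. The following is a minimal, hypothetical sketch (Python with numpy/scipy assumed; it is not the project's actual processing scheme) of how such cues might be extracted from a signal, using a synthetic stand-in for speech so the example is self-contained.

```python
# Hypothetical illustration: extract a low-pass amplitude envelope and a
# crude F0 estimate from a speech-like signal. Sample rate, cutoff, and
# F0 search range are illustrative assumptions, not values from the grant.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16000                                   # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
# Stand-in "speech": a 120 Hz harmonic complex with a slow 3 Hz modulation
signal = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
signal *= 0.5 * (1 + np.sin(2 * np.pi * 3 * t))

def lowpass_envelope(x, fs, cutoff=50.0):
    """Amplitude envelope smoothed below `cutoff` Hz (illustrative value)."""
    env = np.abs(hilbert(x))                 # instantaneous amplitude
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

def estimate_f0(frame, fs, fmin=75.0, fmax=300.0):
    """Autocorrelation-based F0 estimate for one frame (illustration only)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # lag range for the F0 search
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

env = lowpass_envelope(signal, fs)
f0 = estimate_f0(signal[:int(0.04 * fs)], fs)   # first 40 ms frame
print(f"Estimated F0: {f0:.1f} Hz, peak envelope: {env.max():.2f}")
```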

Public Health Relevance

Many deaf individuals can understand speech in quiet environments using a cochlear implant (CI), which stimulates the auditory nerve directly with electrical current, although their performance often declines dramatically in even small amounts of background noise. For those who have some remaining low-frequency hearing, however, the combination of electric and acoustic stimulation (EAS) can significantly improve speech understanding in noise. This proposal aims to continue our work developing novel speech processing schemes that provide an EAS benefit to CI users who do not typically receive one, and that enhance the benefit for those who already receive some.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
2R01DC008329-04A1
Application #
8182846
Study Section
Auditory System Study Section (AUD)
Program Officer
Donahue, Amy
Project Start
2006-07-01
Project End
2016-06-30
Budget Start
2011-07-01
Budget End
2012-06-30
Support Year
4
Fiscal Year
2011
Total Cost
$318,132
Indirect Cost
Name
Arizona State University-Tempe Campus
Department
Other Health Professions
Type
Schools of Allied Health Professions
DUNS #
943360412
City
Tempe
State
AZ
Country
United States
Zip Code
85287
Spencer, Nathaniel J; Tillery, Kate Helms; Brown, Christopher A (2018) The Effects of Dynamic-range Automatic Gain Control on Sentence Intelligibility With a Speech Masker in Simulated Cochlear Implant Listening. Ear Hear
Dorman, Michael F; Natale, Sarah Cook; Butts, Austin M et al. (2017) The Sound Quality of Cochlear Implants: Studies With Single-sided Deaf Patients. Otol Neurotol 38:e268-e273
Brown, Christopher A; Helms Tillery, Kate; Apoux, Frédéric et al. (2016) Shifting Fundamental Frequency in Simulated Electric-Acoustic Listening: Effects of F0 Variation. Ear Hear 37:e18-25
Brown, Christopher A (2014) Binaural enhancement for bilateral cochlear implant users. Ear Hear 35:580-4
Dorman, Michael F; Loiselle, Louise; Stohl, Josh et al. (2014) Interaural level differences and sound source localization for bilateral cochlear implant patients. Ear Hear 35:633-40
Yost, William A; Brown, Christopher A (2013) Localizing the sources of two independent noises: role of time varying amplitude differences. J Acoust Soc Am 133:2301-13
Brown, Christopher A; Yost, William A (2013) Interaural time processing when stimulus bandwidth differs at the two ears. Adv Exp Med Biol 787:247-54
Yost, William A; Loiselle, Louise; Dorman, Michael et al. (2013) Sound source localization of filtered noises by listeners with normal hearing: a statistical analysis. J Acoust Soc Am 133:2876-82
Dorman, Michael F; Spahr, Anthony J; Loiselle, Louise et al. (2013) Localization and speech understanding by a patient with bilateral cochlear implants and bilateral hearing preservation. Ear Hear 34:245-8
Helms Tillery, Kate; Brown, Christopher A; Bacon, Sid P (2012) Comparing the effects of reverberation and of noise on speech recognition in simulated electric-acoustic listening. J Acoust Soc Am 131:416-23

Showing the most recent 10 out of 16 publications