The benefit of combined electric and acoustic stimulation for speech and pitch perception has been demonstrated in a number of studies. A close examination of the literature, however, shows that the amount of combined benefit reported varies across studies and across test materials and conditions. Some patients did not demonstrate any combined benefit, and on rare occasions some even exhibited a potential incompatibility between the two types of stimulation. Several attempts have been made to relate the benefits of combined acoustic and electric hearing to measures of auditory function in the ear with residual hearing, but no significant correlations were found. This study focuses on bimodal hearing, in which listeners receive electric stimulation in one ear and acoustic stimulation in the contralateral ear. The long-term goals of this project are (1) to understand the processing of speech in CI listeners who receive combined acoustic and electric stimulation, and (2) to provide a basis for the development of rehabilitation strategies for improving speech recognition in CI listeners.
The specific aims are (1) to identify the speech information extracted by electric hearing in the high-frequency regions and by residual acoustic hearing in the low-frequency regions; (2) to investigate how the information extracted from each ear is integrated in normal-hearing and cochlear-implant listeners; and (3) to relate phoneme recognition performance to sentence recognition performance. We will apply several well-developed speech integration models, including a simple probabilistic model, the Fuzzy Logic Model of Perception, and the Pre-Labeling and Post-Labeling models, to predict intelligibility scores for combined hearing. This model-based approach provides the means to systematically study differences in the abilities of cochlear-implant listeners to simultaneously extract speech information from acoustic and electric stimulation and to integrate this information across ears. The proposed work is of high clinical relevance because it may help identify deficits in information extraction and/or integration encountered by individual implant users and aid in developing rehabilitative strategies tailored to individual needs.

Relevance: The purpose of this study is to investigate how speech information is integrated across ears in individuals who wear a cochlear implant in one ear and a hearing aid in the opposite ear. The proposed work is of high clinical relevance because it may help identify the problems encountered by implant users and aid in developing rehabilitative strategies tailored to individual needs.
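As an illustration of the model-based approach, the minimal sketch below (not part of the project's own software) shows one way an FLMP-style integration rule can be expressed in Python: per-category support values derived separately from each ear's identification data are combined multiplicatively and renormalized to predict bimodal response probabilities. The function name and the support values are hypothetical.

import numpy as np

def flmp_integrate(acoustic_support, electric_support):
    # Fuzzy Logic Model of Perception (FLMP)-style integration:
    # per-category support values from the acoustic and electric ears
    # are multiplied and renormalized, yielding predicted response
    # probabilities for the combined (bimodal) condition.
    a = np.asarray(acoustic_support, dtype=float)
    e = np.asarray(electric_support, dtype=float)
    combined = a * e
    return combined / combined.sum()

# Hypothetical support values for three response categories of one token
acoustic = [0.6, 0.3, 0.1]  # from the hearing-aid (acoustic) ear
electric = [0.5, 0.2, 0.3]  # from the cochlear-implant (electric) ear
print(flmp_integrate(acoustic, electric))  # approx. [0.769, 0.154, 0.077]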

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Small Research Grants (R03)
Project #
5R03DC009684-03
Application #
7826670
Study Section
Special Emphasis Panel (ZDC1-SRB-C (25))
Program Officer
Donahue, Amy
Project Start
2008-06-13
Project End
2012-05-31
Budget Start
2010-06-01
Budget End
2012-05-31
Support Year
3
Fiscal Year
2010
Total Cost
$154,440
Indirect Cost
Name
Northeastern University
Department
Other Health Professions
Type
Schools of Allied Health Professions
DUNS #
001423631
City
Boston
State
MA
Country
United States
Zip Code
02115
Kong, Ying-Yee; Mullangi, Ala (2013) Using a vocoder-based frequency-lowering method and spectral enhancement to improve place-of-articulation perception for hearing-impaired listeners. Ear Hear 34:300-12
Kong, Ying-Yee; Mullangi, Ala; Marozeau, Jeremy (2012) Timbre and speech perception in bimodal and bilateral cochlear-implant listeners. Ear Hear 33:645-59
Kong, Ying-Yee; Mullangi, Ala (2012) On the development of a frequency-lowering system that enhances place-of-articulation perception. Speech Commun 54:147-160
Kong, Ying-Yee; Braida, Louis D (2011) Cross-frequency integration for consonant and vowel identification in bimodal hearing. J Speech Lang Hear Res 54:959-80
Kong, Ying-Yee; Mullangi, Ala; Marozeau, Jeremy et al. (2011) Temporal and spectral cues for musical timbre perception in electric hearing. J Speech Lang Hear Res 54:981-94