Auditory spectral integration has been reported, primarily in the speech perception literature, for at least 50 years. Under some conditions, spectral components falling within a critical region of approximately 5 ERBu (3.5 Bark) are processed by the central auditory system so that two or more resonance peaks are approximated by a single peak located at the spectral center of gravity (COG) of the original sound. These findings lead to a "resolution vs. integration" paradox in the frequency domain analogous to the "resolution vs. integration" paradox in the time domain (Viemeister, 1996). Recently, Lublinskaja (1996) demonstrated that changing the COG of a two-resonance signal over time leads listeners to hear a frequency transition that follows the dynamic COG. In our previous work, we developed a computational model and applied it to the COG effect for both static and dynamic signals. In the proposed work we will first better define the stimulus parameters that limit listener performance in spectral integration tasks and use our findings to revise and improve the model. To that end, we will assess the effect of uncertainty in signal parameters on performance for both static and dynamic complex sounds. In addition to listeners with normal hearing, we will test listeners with sensorineural hearing loss due to outer hair cell dysfunction. These studies will help us better understand the spectral integration phenomenon. Beyond exploring the psychoacoustic aspects of the COG phenomenon, a second specific aim of this project is to address the function, limits, and salience of COG effects (broadly defined) within the context of speech perception. Unlike the majority of studies in the literature, we will not strictly limit our manipulations of the acoustic signal to formant frequencies, formant amplitudes, and/or individual harmonics.
If this dynamic COG effect proves to be robust, it may be possible to incorporate it into novel signal processing schemes for cochlear implant users for whom dynamic sounds pose a challenge.
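The COG described above is, at its core, the amplitude-weighted mean frequency of the spectral components. A minimal sketch of that computation follows; the function name and the component frequencies and amplitudes are illustrative assumptions, not stimuli or code from the project.

```python
def spectral_cog(freqs_hz, amps):
    """Amplitude-weighted mean frequency (spectral center of gravity)
    of a set of spectral components."""
    total = sum(amps)
    if total == 0:
        raise ValueError("component amplitudes sum to zero")
    return sum(f * a for f, a in zip(freqs_hz, amps)) / total

# Hypothetical example: two equal-amplitude resonance peaks at 500 Hz
# and 900 Hz. If they fall within the ~3.5 Bark integration region,
# listeners may report a single peak near the 700 Hz COG.
print(spectral_cog([500.0, 900.0], [1.0, 1.0]))  # 700.0
```

Raising the amplitude of one component pulls the COG toward that component's frequency, which is how a dynamic COG percept can be created without moving the resonance frequencies themselves.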
Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun (2011) Auditory spectral integration in the perception of static vowels. J Speech Lang Hear Res 54:1667-81
Patra, Harisadhan; Roup, Christina M; Feth, Lawrence L (2011) Masking of low-frequency signals by high-frequency, high-level narrow bands of noise. J Acoust Soc Am 129:876-87
Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun (2010) Auditory spectral integration in the perception of diphthongal vowels. J Acoust Soc Am 128:2070-4
Fox, Robert Allen; Jacewicz, Ewa (2010) Auditory spectral integration in nontraditional speech cues in diotic and dichotic listening. Percept Mot Skills 111:543-58
Hoglund, Evelyn M; Feth, Lawrence L (2009) Spectral and temporal integration of brief tones. J Acoust Soc Am 125:261-9
Fox, Robert Allen; Jacewicz, Ewa; Feth, Lawrence L (2008) Spectral integration of dynamic cues in the perception of syllable-initial stops. Phonetica 65:19-44