From a public health perspective, understanding the processing of speech sounds is critical for improving the lives of people with communication disorders. The neural code for speech over the range of sound and noise levels experienced daily remains elusive because of strong nonlinearities in the inner ear and in central auditory neurons. In collaboration with a phonetician, we are studying neural responses to acoustic parameters that are crucial for differentiating speech sounds. This proposal focuses on the neural coding of vowels in quiet and in noise. The rationale for focusing on vowels is their fundamental role in carrying information, especially in discourse, and their centrality in all known speech systems. We have developed a novel, testable hypothesis for the robust representation in the midbrain of two salient features of vowels: fundamental frequency (F0), or voice pitch, and formant frequencies, the spectral peaks that differentiate vowels. This hypothesis takes into account two facts: i) in addition to having a best frequency (BF), most midbrain neurons are tuned for periodicities in the range of voice pitch, and ii) the strength of the periodicities in the response of the periphery changes systematically with the relation between BF and formant frequency. In particular, the rate fluctuations of auditory-nerve (AN) responses that are synchronized to the F0 of a vowel are weak for fibers tuned near formant frequencies and strong for fibers tuned between formants. This variation in the amplitude of low-frequency rate fluctuations across the AN is propagated to the midbrain, where the rates of neurons sensitive to modulation frequency change markedly depending on the relation between BF and vowel formant frequencies. The profile of rates across midbrain neurons thus encodes the formant frequencies of vowels and is robust across a wide range of sound levels and in the presence of noise.
This code is appropriately vulnerable to changes in peripheral tuning, to decreases in the strength of peripheral nonlinearities such as synchrony capture, and to changes in central inhibitory processing associated with aging. Our vowel-coding hypothesis will be tested by quantitatively relating behavioral thresholds for detection and discrimination of formants to physiological responses at the level of the midbrain. We will further develop our models for signal processing in the auditory midbrain to include a nonlinear feature of neural processing, mode-locking, that is observed in the midbrain. We hypothesize that mode-locking contributes to the representation of strongly periodic sounds, such as voiced speech, by boosting the responses of neurons with band-pass modulation tuning to strongly modulated sounds. This work will lead to improved signal-processing algorithms to assist the growing number of people with hearing loss. Because the representation proposed by our vowel-coding hypothesis differs fundamentally from classical models of the neural representation of speech sounds, the signal-processing strategies needed to restore it will differ fundamentally from existing strategies.
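The peripheral side of this hypothesis can be illustrated with a toy simulation; the sketch below is ours, not the proposal's actual model, and the formant values, resonance-shaped spectral envelope, and fixed-bandwidth Butterworth filterbank are all illustrative assumptions standing in for cochlear tuning. It synthesizes a two-formant harmonic "vowel," filters it into frequency channels, and measures the depth of envelope fluctuation at F0 in each channel: channels tuned near a formant are dominated by one strong harmonic and fluctuate weakly at F0, while channels tuned between formants carry several comparable harmonics whose beating produces strong F0-rate fluctuations.

```python
import numpy as np
from scipy.signal import butter, lfilter, hilbert

fs = 16000                    # sampling rate (Hz)
f0 = 100.0                    # voice pitch (Hz)
formants = (500.0, 1500.0)    # hypothetical formant frequencies (Hz)
t = np.arange(int(0.5 * fs)) / fs

def spectral_envelope(f):
    """Simple resonance-shaped spectral envelope peaking at the formants."""
    return sum(1.0 / (1.0 + ((f - fm) / 100.0) ** 2) for fm in formants)

# Harmonic "glottal" source shaped by the spectral envelope
vowel = sum(spectral_envelope(k * f0) * np.sin(2 * np.pi * k * f0 * t)
            for k in range(1, 30))

bfs = np.arange(300, 2100, 100)   # "best frequencies" of the model channels
mod_depth = []
for bf in bfs:
    # Fixed 100-Hz bandwidth is a simplification (cochlear filters broaden
    # with best frequency) but suffices for this qualitative sketch.
    b, a = butter(2, [bf - 50.0, bf + 50.0], btype='band', fs=fs)
    env = np.abs(hilbert(lfilter(b, a, vowel)))   # channel envelope
    env = env[int(0.1 * fs):]                     # drop the onset transient
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    f0_bin = int(np.argmin(np.abs(freqs - f0)))
    # Relative amplitude of the F0 component of the envelope fluctuation
    mod_depth.append(spec[f0_bin] / (env.mean() * len(env) / 2.0))
```

Plotting `mod_depth` against `bfs` shows dips near 500 and 1500 Hz and a peak between the formants, i.e., the across-channel profile of F0-synchronized fluctuation that, in the hypothesis, midbrain neurons tuned to voice-pitch periodicities would convert into a rate profile encoding the formant frequencies.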
The public-health significance of the proposed work is that it will improve our understanding of how a fundamental speech sound, the vowel, is coded by neurons in the auditory system. Using behavioral and physiological techniques, we will test a novel hypothesis for robust neural coding of vowels by the healthy auditory system in quiet and noisy conditions; using computational models, we will then test the impact of hearing loss and aging on this neural coding. This work will lead to novel strategies for preserving and enhancing the representation of speech sounds in assistive devices such as hearing aids and auditory prostheses.
|Abdolrahmani, Mohammad; Doi, Takahiro; Shiozaki, Hiroshi M et al. (2016) Pooled, but not single-neuron, responses in macaque V4 represent a solution to the stereo correspondence problem. J Neurophysiol 115:1917-31|
|Henry, Kenneth S; Neilans, Erikson G; Abrams, Kristina S et al. (2016) Neural correlates of behavioral amplitude modulation sensitivity in the budgerigar midbrain. J Neurophysiol 115:1905-16|
|Carney, Laurel H; Li, Tianhao; McDonough, Joyce M (2015) Speech Coding in the Brain: Representation of Vowel Formants by Midbrain Neurons Tuned to Sound Fluctuations. eNeuro 2:|
|Zilany, Muhammad S A; Bruce, Ian C; Carney, Laurel H (2014) Updated parameters and expanded simulation options for a model of the auditory periphery. J Acoust Soc Am 135:283-6|
|Carney, Laurel H; Zilany, Muhammad S A; Huang, Nicholas J et al. (2014) Suboptimal use of neural information in a mammalian auditory system. J Neurosci 34:1306-13|
|Carney, Laurel H; Ketterer, Angela D; Abrams, Kristina S et al. (2013) Detection thresholds for amplitude modulations of tones in budgerigar, rabbit, and human. Adv Exp Med Biol 787:391-8|
|Schwarz, Douglas M; Zilany, Muhammad S A; Skevington, Melissa et al. (2012) Semi-supervised spike sorting using pattern matching and a scaled Mahalanobis distance metric. J Neurosci Methods 206:120-31|
|Carney, Laurel H; Sarkar, Srijata; Abrams, Kristina S et al. (2011) Sound-localization ability of the Mongolian gerbil (Meriones unguiculatus) in a task with a simplified response map. Hear Res 275:89-95|
|Wojtczak, Magdalena; Nelson, Paul C; Viemeister, Neal F et al. (2011) Forward masking in the amplitude-modulation domain for tone carriers: psychophysical results and physiological correlates. J Assoc Res Otolaryngol 12:361-73|
|Zilany, Muhammad S A; Carney, Laurel H (2010) Power-law dynamics in an auditory-nerve model can account for neural adaptation to sound-level statistics. J Neurosci 30:10380-90|
Showing the most recent 10 out of 29 publications