The proposed experiments will define how cortical neurons support robust perception of complex sounds, such as speech and other communication sounds, in natural listening environments that include background noise. Signal-in-noise (SIN) processing has been studied primarily psychophysically; the neural mechanisms that support the remarkable noise tolerance exhibited by normal hearing are not well understood. We focus on the encoding of low-frequency envelope information because it is crucial for intelligible speech, and because improving speech intelligibility for the hearing impaired is an important clinical goal. More broadly, dynamic features of sound envelopes, such as common onsets, offsets, and modulation characteristics, drive auditory scene segmentation. These features are also particularly well represented in the response dynamics of cortical neurons. However, it has proven difficult to develop a general framework for understanding cortical envelope processing because the relationship between the stimulus envelope and the neural response pattern is typically both complex and substantially nonlinear. We hypothesize that the nonlinear dynamics of cortical responses endow them with a temporal precision that is essential to the robustness of SIN processing. To test this hypothesis, we will employ a novel nonlinear modeling framework to estimate spectrotemporal receptive fields (STRFs) of neurons recorded from the core auditory fields of awake, behaving squirrel monkeys using 16-channel linear probes. We will evaluate the ability of nonlinear STRF models, including reduced (e.g., linear) and modified forms, to describe the dynamics of cortical responses to sounds with simple, parametrically varied envelopes (Aim 1). We will compare the performance of these models against real neurons in encoding complex vocalizations embedded in noise (Aim 2), and test candidate neural mechanisms for 'denoising' those signals in the context of optimal Bayesian population decoding methods. Finally, we will assess the effect of attentional filtering on SIN processing by recording from animals presented with identical complex stimuli while engaged in separate tasks, only one of which requires attention to detailed envelope features (i.e., modulation-frequency change detection versus sound-offset detection), while simultaneously deriving STRF models for subsequent comparison (Aim 3). These experiments will provide valuable insight into candidate neural mechanisms that support both bottom-up and top-down aspects of auditory scene segmentation, and will support rigorous, quantitative, model-based approaches to characterizing laminar transformations in the cortical representation of complex sounds.
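
To make the model-comparison idea concrete, the sketch below is a minimal, illustrative example (not the project's actual analysis pipeline or data): it fits a linear STRF to simulated spectrogram and firing-rate data by ridge regression and scores how well the linear model predicts the response. All variable names, the simulated rectifying "neuron," and the parameter values are assumptions for illustration; the nonlinear STRF variants proposed above would add output nonlinearities or context-dependent terms to this baseline.

```python
# Minimal sketch (illustrative assumptions only): estimate a linear STRF by
# ridge regression on simulated data, then score its prediction accuracy.
import numpy as np

rng = np.random.default_rng(0)

n_freq, n_lag, n_t = 16, 20, 5000           # spectral channels, time lags, time bins
spec = rng.standard_normal((n_freq, n_t))   # stand-in stimulus spectrogram

# Build the lagged design matrix: each row holds the recent stimulus history.
X = np.zeros((n_t, n_freq * n_lag))
for lag in range(n_lag):
    X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:, :n_t - lag].T

# Simulated "neuron": a ground-truth STRF followed by a rectifying nonlinearity.
true_strf = rng.standard_normal(n_freq * n_lag)
drive = X @ true_strf
rate = np.maximum(drive, 0) + rng.standard_normal(n_t)   # rectified drive + noise

# Ridge-regression STRF estimate: w = (X'X + lambda*I)^(-1) X'y
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ rate)

# Prediction accuracy of the purely linear model on this nonlinear neuron.
pred = X @ w
r = np.corrcoef(pred, rate)[0, 1]
print(f"linear STRF prediction correlation: {r:.3f}")
```

In this toy setting, the gap between the linear model's prediction correlation and the best achievable fit reflects the response nonlinearity, which is the quantity the proposed model comparisons would characterize in recorded cortical neurons.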

Public Health Relevance

The principal difficulty faced by people with peripheral hearing loss, and even by people with some language-learning and reading impairments, is reduced speech comprehension due to the background noise present in typical listening environments. This project explores the fundamental neural coding principles that enable robust speech comprehension in challenging listening environments. Knowledge of these principles will guide the development of novel therapeutic approaches to communication disorders, such as algorithms for speech enhancement in hearing aids and stimulation protocols for neural prosthetic devices for hearing.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
1R01DC011843-01A1
Application #
8297243
Study Section
Auditory System Study Section (AUD)
Program Officer
Platt, Christopher
Project Start
2012-04-01
Project End
2017-03-31
Budget Start
2012-04-01
Budget End
2013-03-31
Support Year
1
Fiscal Year
2012
Total Cost
$369,355
Indirect Cost
$119,355
Name
University of California San Francisco
Department
Otolaryngology
Type
Schools of Medicine
DUNS #
094878337
City
San Francisco
State
CA
Country
United States
Zip Code
94143
Hoglen, Nerissa E G; Larimer, Phillip; Phillips, Elizabeth A K et al. (2018) Amplitude modulation coding in awake mice and squirrel monkeys. J Neurophysiol 119:1753-1766
Malone, B J; Heiser, Marc A; Beitel, Ralph E et al. (2017) Background noise exerts diverse effects on the cortical encoding of foreground sounds. J Neurophysiol 118:1034-1054
Bigelow, James; Malone, Brian J (2017) Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates. PLoS One 12:e0183914
Teschner, Magnus J; Seybold, Bryan A; Malone, Brian J et al. (2016) Effects of Signal-to-Noise Ratio on Auditory Cortical Frequency Processing. J Neurosci 36:2743-56
Malone, Brian J; Scott, Brian H; Semple, Malcolm N (2015) Diverse cortical codes for scene segmentation in primate auditory cortex. J Neurophysiol 113:2934-52
Malone, Brian J; Beitel, Ralph E; Vollmer, Maike et al. (2015) Modulation-frequency-specific adaptation in awake auditory cortex. J Neurosci 35:5904-16
Schreiner, Christoph E; Malone, Brian J (2015) Representation of loudness in the auditory cortex. Handb Clin Neurol 129:73-84
Malone, Brian J; Scott, Brian H; Semple, Malcolm N (2014) Encoding frequency contrast in primate auditory cortex. J Neurophysiol 111:2244-63
Malone, Brian J; Beitel, Ralph E; Vollmer, Maike et al. (2013) Spectral context affects temporal processing in awake auditory cortex. J Neurosci 33:9431-50