Throughout life, humans and other animals learn statistical regularities in the acoustic environment and adapt their hearing to emphasize the elements of sound that are important for behavioral decisions. Using these abilities, normal-hearing humans are able to perceive important sounds in crowded, noisy environments and to understand the speech of individuals the first time they meet. However, patients with peripheral hearing loss or central processing disorders often have problems hearing in these challenging settings, even when sound is amplified above perceptual threshold.

This study seeks to characterize how two major areas in the brain's auditory network, auditory cortex and the inferior colliculus of the midbrain, establish an interface between incoming auditory signals and the internal brain states that select information appropriate to the current behavioral context. Single-unit neural activity will be recorded from both of these brain areas in awake ferrets during the presentation of complex naturalistic sounds that mimic the acoustic environment encountered in the real world. Internal brain state will be controlled by selective attention to specific sound features in these complex stimuli. Changes in stimulus-evoked neural activity as attention shifts among sound features will be measured to identify interactions between internal state and incoming sensory signals in these different areas. Previous work has identified a large corticofugal projection from auditory cortex to inferior colliculus that could produce task-dependent changes in selectivity in inferior colliculus. This study will test the role of these corticofugal projections by optogenetic inactivation of auditory cortex during recordings from inferior colliculus. Selective inactivation of specific pathways will characterize how the network of brain areas works together to produce effective auditory behaviors.

Computational modeling tools will be used to determine, from an algorithmic perspective, how neurons encode information about the natural stimuli and how this encoding changes as attention is shifted between features. Data collected during behavior will be used to develop models that combine bottom-up sensory processing and top-down behavioral control. This computational approach builds on classic characterizations of neural stimulus-response relationships using spectro-temporal receptive field models. New models will be developed that incorporate behavioral state variables and nonlinear biological circuit elements into established model frameworks. Together, these studies will provide new insight into the computational strategies used by the behaving brain to process complex sounds in real-world contexts.
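
The spectro-temporal receptive field (STRF) framework referenced above treats a neuron as a linear filter applied to the stimulus spectrogram, followed by a static output nonlinearity; the proposed extensions add behavioral state variables to that baseline model. The sketch below illustrates the general idea with a simple multiplicative state gain. It is an illustration only, not the authors' code; all function and variable names (strf_ln_response, state_gain, etc.) are assumptions made for this example.

    # Minimal sketch (assumed, illustrative) of a linear-nonlinear STRF model
    # extended with a multiplicative behavioral-state gain term.
    import numpy as np

    def strf_ln_response(spectrogram, strf, state, state_gain, baseline=0.0):
        """Predict a firing rate from a stimulus spectrogram.

        spectrogram : (n_freq, n_time) stimulus power in frequency bands
        strf        : (n_freq, n_lag) spectro-temporal receptive field weights
        state       : (n_time,) behavioral-state variable, e.g. 1 when the
                      animal attends the modeled sound feature, 0 otherwise
        state_gain  : scalar controlling how strongly state scales the output
        """
        n_freq, n_time = spectrogram.shape
        n_lag = strf.shape[1]
        linear_drive = np.zeros(n_time)
        for lag in range(n_lag):
            # stimulus delayed by `lag` time bins (zero-padded at the start)
            shifted = np.zeros((n_freq, n_time))
            shifted[:, lag:] = spectrogram[:, :n_time - lag]
            # weight each frequency channel by the STRF at this lag and sum
            linear_drive += np.sum(strf[:, lag:lag + 1] * shifted, axis=0)
        # static output nonlinearity (rectification), then state-dependent gain
        rate = np.maximum(linear_drive + baseline, 0.0)
        return rate * (1.0 + state_gain * state)

In practice the STRF weights and the state-dependent parameters would be fit to recorded spike rates (for example by regularized regression); that estimation step is omitted here for brevity.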

Public Health Relevance

Patients with hearing impairments and central neurological disorders have difficulty processing and making accurate judgments about auditory stimuli, particularly in challenging, noisy environments. We seek to understand how the healthy brain learns to extract useful information from sounds in order to guide effective behavior. These experiments will provide novel insight into how the brain solves these problems, insight that can be used to develop new devices and treatments for these debilitating conditions.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC014950-04
Application #: 9634046
Study Section: Auditory System Study Section (AUD)
Program Officer: Poremba, Amy
Project Start: 2016-02-01
Project End: 2021-01-31
Budget Start: 2019-02-01
Budget End: 2020-01-31
Support Year: 4
Fiscal Year: 2019
Total Cost:
Indirect Cost:
Name: Oregon Health and Science University
Department: Otolaryngology
Type: Schools of Medicine
DUNS #: 096997515
City: Portland
State: OR
Country: United States
Zip Code: 97239
Bagur, Sophie; Averseng, Martin; Elgueda, Diego et al. (2018) Go/No-Go task engagement enhances population representation of target stimuli in primary auditory cortex. Nat Commun 9:2529
Lu, Kai; Liu, Wanyi; Zan, Peng et al. (2018) Implicit Memory for Complex Sounds in Higher Auditory Cortex of the Ferret. J Neurosci 38:9955-9966
Schwartz, Zachary P; David, Stephen V (2018) Focal Suppression of Distractor Sounds by Selective Attention in Auditory Cortex. Cereb Cortex 28:323-339
David, Stephen V (2018) Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding. Hear Res 360:107-123
Ding, Nai; Simon, Jonathan Z; Shamma, Shihab A et al. (2016) Encoding of natural sounds by variance of the cortical local field potential. J Neurophysiol 115:2389-2398
Slee, Sean J; David, Stephen V (2015) Rapid Task-Related Plasticity of Spectrotemporal Receptive Fields in the Auditory Midbrain. J Neurosci 35:13090-13102