Effective vocal communication requires the ability to detect and discriminate vocalizations from a background of distracting sounds. A prominent and effective distractor in human communication is the speech of others. However, even in a crowded room, a listener can focus on a single speaker and ignore the rest of the crowd. This ability to parse an auditory scene is severely impaired in patients with auditory processing disorder (APD), a central auditory system disorder that affects as many as 5% of the population. Patients with APD are unable to attend to speech in even mildly noisy auditory scenes despite normal hearing ranges and thresholds, suggesting that the disorder lies in central auditory circuits rather than in peripheral sensation. Although multiple behavioral therapies are used to treat the disorder with varying success, none is informed by an understanding of how neural circuits extract salient vocalizations from complex acoustic environments. The zebra finch is a well-studied animal model of vocal communication. Like humans, zebra finches naturally recognize and discriminate among vocalizations in noisy acoustic environments. The proposed experiments use behavioral, electrophysiological and computational techniques to investigate how vocalizations are encoded, decoded and filtered in the auditory system of the awake, behaving zebra finch.
The specific aims of these experiments are to study 1) whether perceptual priming influences birds' abilities to discriminate in auditory scenes; 2) how well behaving animals and single neurons discriminate among vocalizations embedded in a distracting background; and 3) whether neural activity more closely tracks the sensation or the perception of degraded vocalizations.
By understanding how single neurons and neural circuits process communication sounds, we can begin to develop data-driven approaches for treating central auditory disorders such as APD. These experiments should yield informative principles for understanding how neural circuits encode complex sensory signals and how noisy sensory information is filtered to create a behaviorally meaningful neural representation of the sensory world.
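Single-neuron discrimination of the kind described in Aim 2 is often quantified with spike-train distance metrics combined with template matching. The sketch below illustrates one common approach of this type; it is not the authors' exact analysis, and the function names (`vr_distance`, `classify`), the kernel time constant `tau`, and the example spike times are illustrative assumptions. Each spike train is convolved with an exponential kernel, a van Rossum distance is computed between the resulting traces, and a trial is assigned to the stimulus whose template response it most closely resembles.

```python
import numpy as np

def vr_distance(train1, train2, tau=10.0, dt=1.0, duration=200.0):
    """van Rossum distance between two spike trains (times in ms).

    Each train is convolved with a causal exponential kernel of time
    constant tau; the distance is the L2 norm of the trace difference.
    """
    time = np.arange(0.0, duration, dt)

    def trace(spikes):
        x = np.zeros_like(time)
        for s in spikes:
            # Add an exponentially decaying bump starting at each spike.
            x += (time >= s) * np.exp(-(time - s) / tau)
        return x

    diff = trace(train1) - trace(train2)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

def classify(trial, templates, tau=10.0):
    """Assign a trial spike train to the nearest stimulus template."""
    dists = {stim: vr_distance(trial, tmpl, tau) for stim, tmpl in templates.items()}
    return min(dists, key=dists.get)

# Hypothetical example: template responses to two vocalizations,
# and a held-out trial that resembles the response to stimulus "A".
templates = {"A": [20.0, 60.0, 100.0], "B": [40.0, 80.0, 140.0]}
trial = [21.0, 59.0, 101.0]
decoded = classify(trial, templates)  # expected to match "A"
```

In analyses of this kind, the time constant `tau` sets the temporal resolution of the comparison: small values reward precise spike timing, while large values emphasize overall firing rate, so sweeping `tau` reveals the timescale at which a neuron best discriminates among vocalizations.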
Schneider, David M; Woolley, Sarah M N (2013) Sparse and background-invariant coding of vocalizations in auditory scenes. Neuron 79:141-52
Lewi, Jeremy; Schneider, David M; Woolley, Sarah M N et al. (2011) Automating the design of informative sequences of sensory stimuli. J Comput Neurosci 30:181-200
Schumacher, Joseph W; Schneider, David M; Woolley, Sarah M N (2011) Anesthetic state modulates excitability but not spectral tuning or neural discrimination in single auditory midbrain neurons. J Neurophysiol 106:500-14
Ramirez, Alexandro D; Ahmadian, Yashar; Schumacher, Joseph et al. (2011) Incorporating naturalistic correlation structure improves spectrogram reconstruction from neuronal activity in the songbird auditory midbrain. J Neurosci 31:3828-42
Calabrese, Ana; Schumacher, Joseph W; Schneider, David M et al. (2011) A generalized linear model for estimating spectrotemporal receptive fields from responses to natural sounds. PLoS One 6:e16104
Schneider, David M; Woolley, Sarah M N (2011) Extra-classical tuning predicts stimulus-dependent receptive fields in auditory neurons. J Neurosci 31:11867-78
Gess, Austen; Schneider, David M; Vyas, Akshat et al. (2011) Automated auditory recognition training and testing. Anim Behav 82:285-293
Schneider, David M; Woolley, Sarah M N (2010) Discrimination of communication vocalizations by single neurons and groups of neurons in the auditory midbrain. J Neurophysiol 103:3248-65