The ability to recognize complex patterns in nature is typically effortless for the human brain. For example, healthy humans easily recognize faces, complex odor mixtures and tastes, and words and sentences, even when these patterns are corrupted by noise or occur in different contexts. Pattern recognition is not only essential for communication and for interacting with the environment; it is also key to memory formation. However, the underlying mechanisms remain mysterious. We do not yet have a complete solution for how any brain (of any model system, large or small) solves this problem, and programming a computer to accomplish the feats of pattern recognition that humans are capable of is still an active area of research. This gap presents a major roadblock to treating the large number of individuals with pattern recognition deficits (e.g., patients suffering from central auditory processing disorder, visual agnosia, autism spectrum disorder, various neurodegenerative diseases, or a recent stroke).

Here we propose to find a solution to this problem in a brain capable of pattern recognition, but with orders of magnitude fewer neurons than most mammalian brains. My lab has recently demonstrated, using quantitative behavioral assays, computational modeling, and neural circuit manipulations, that flies can both produce and detect dynamic acoustic patterns that vary over multiple timescales. Moreover, we have pioneered methods to functionally characterize neurons of the acoustic communication system of Drosophila, from sensory inputs all the way to motor outputs. Building on these achievements, we now propose a strategy for recording from the complete set of input and output neurons of the network(s) underlying acoustic pattern recognition in this model system, and for mapping the underlying connections. To do this, we focus on testing two prominent hypotheses (posited across model systems) for how the brain accomplishes song pattern recognition. The first set of experiments tests the hypothesis that temporal feature selectivity and song pattern recognition require a precise balance of excitation and inhibition within the auditory pathway, ultimately generating sparse and selective responses. The second set tests the hypothesis that song pattern recognition relies on template matching, in which a neural network compares the incoming auditory signal to an internal representation of a particular pattern.

The ultimate goal of this line of research is to inspire the design of simple (based on few neurons) neural prosthetic devices to restore or supplement brain function lost during disease or injury. Because patterns in fly song and human speech vary over similar timescales, neural computations for recognizing song patterns in Drosophila should be informative for solving pattern recognition in more complex systems. More broadly, our results will contribute to a deeper understanding of how nervous systems process auditory and species-specific information, and have the potential to transform our understanding of how nervous systems produce sensory-driven behaviors.
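For readers unfamiliar with the template-matching idea mentioned above, the sketch below illustrates it in its simplest algorithmic form: a stored template is slid along an incoming signal and the match strength is scored at every lag by normalized cross-correlation. This is only a toy illustration, not the proposed neural circuit model; the pulse spacing, noise level, sampling rate, and all numerical values are hypothetical and chosen purely for demonstration.

```python
import numpy as np

def normalized_xcorr(signal, template):
    """Score how well `template` matches `signal` at every possible lag
    using normalized cross-correlation (values near 1 indicate a match)."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    scores = []
    for lag in range(len(signal) - n + 1):
        window = signal[lag:lag + n]
        w = (window - window.mean()) / (window.std() + 1e-12)
        scores.append(np.dot(w, t) / n)
    return np.array(scores)

# Illustrative template: pulse-like events spaced ~35 samples apart
# (roughly 35 ms at an assumed 1 kHz sampling rate; values are arbitrary).
template = np.zeros(100)
template[[10, 45, 80]] = 1.0

# Hypothetical incoming signal: noise with the pattern embedded at sample 400.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.2, 1000)
signal[400:500] += template

scores = normalized_xcorr(signal, template)
print("best match near sample:", int(scores.argmax()))  # expected near 400
```

In this framing, "recognition" corresponds to the correlation exceeding a threshold at some lag; the hypothesis under test is whether an analogous comparison against an internally stored pattern is implemented by the fly's auditory circuitry.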

Public Health Relevance

The human brain is excellent at recognizing complex patterns in nature, such as faces, odor mixtures, or words and phrases. A major roadblock to treating the large number of individuals with pattern recognition deficits (e.g., in autism spectrum disorder or various neurodegenerative diseases) is the lack of a complete neural solution (in any model system) to this problem. Using a suite of new methods my lab has pioneered to study acoustic communication in Drosophila, we offer a comprehensive strategy for resolving how the fly brain accomplishes acoustic pattern recognition on timescales similar to human perception; such a solution should therefore inspire the design of simple neural prosthetic devices for pattern recognition.

Agency: National Institutes of Health (NIH)
Institute: National Institute of Neurological Disorders and Stroke (NINDS)
Type: NIH Director’s New Innovator Award (DP2)
Project #: 1DP2NS092378-01
Application #: 8755764
Study Section: Special Emphasis Panel (ZRG1-MOSS-C (56))
Program Officer: Talley, Edmund M
Project Start: 2014-09-30
Project End: 2019-08-31
Budget Start: 2014-09-30
Budget End: 2019-08-31
Support Year: 1
Fiscal Year: 2014
Total Cost: $2,430,000
Indirect Cost: $930,000
Name: Princeton University
Department: Biochemistry
Type: Schools of Arts and Sciences
DUNS #: 002484665
City: Princeton
State: NJ
Country: United States
Zip Code: 08543
Clemens, Jan; Coen, Philip; Roemschied, Frederic A et al. (2018) Discovery of a New Song Mode in Drosophila Reveals Hidden Structure in the Sensory and Neural Drivers of Behavior. Curr Biol 28:2400-2412.e6
Clemens, Jan; Ozeri-Engelhard, Nofar; Murthy, Mala (2018) Fast intensity adaptation enhances the encoding of sound in Drosophila. Nat Commun 9:134
Calhoun, Adam J; Murthy, Mala (2017) Quantifying behavior to solve sensorimotor transformations: advances from worms and flies. Curr Opin Neurobiol 46:90-98
Stern, David L; Clemens, Jan; Coen, Philip et al. (2017) Experimental and statistical reevaluation provides no evidence for Drosophila courtship song rhythms. Proc Natl Acad Sci U S A 114:9978-9983
Crocker, Amanda; Guan, Xiao-Juan; Murphy, Coleen T et al. (2016) Cell-Type-Specific Transcriptome Analysis in the Drosophila Mushroom Body Reveals Memory-Related Changes in Gene Expression. Cell Rep 15:1580-1596
Coen, Philip; Xie, Marjorie; Clemens, Jan et al. (2016) Sensorimotor Transformations Underlying Variability in Song Intensity during Drosophila Courtship. Neuron 89:629-44
Clemens, Jan; Girardin, Cyrille C; Coen, Pip et al. (2015) Connecting Neural Codes with Behavior in the Auditory System of Drosophila. Neuron 87:1332-1343