Two cortical pathways originate from the primary (core) areas of auditory cortex: a ventral pathway subserving the identification of sounds, and a dorsal pathway, originally defined, by analogy with the visual system, as a processing stream for space and motion. We have recently proposed that this dorsal pathway should be redefined more broadly as a processing stream for sensorimotor integration and control (Rauschecker, 2011). This broader function explicitly includes spatial processing but also extends to the processing of temporal sequences, including speech and musical melodies in humans.

In this project, we will test the expanded model of the auditory dorsal stream by training rhesus monkeys to produce fixed sound sequences on a newly designed behavioral apparatus (a "monkey piano"). By pressing a lever, the monkey produces a musical tone of a specific pitch; by pressing several levers in succession, it produces a melody. After a monkey has learned to play the same melody reliably, we will perform functional magnetic resonance imaging (fMRI) of auditory-responsive brain regions in the awake monkey while it listens to the learned, self-generated sequence. Control stimuli will include melodies to which the monkey has been passively exposed for the same amount of time by listening to another monkey play, as well as novel melodies the monkey has never heard before. Preliminary data suggest that areas activated by the self-generated melody include a region in inferior parietal cortex as well as one focus each in dorsal and ventral premotor cortex. The locations of activated regions will guide subsequent electrophysiological recordings with linear microelectrode arrays (LMAs), and each recording site will be tested with the same sequences. Next, we will record neuronal responses in premotor cortex during passive listening to the sound sequences and compare them with neuronal activity obtained while the monkey actively produces the sequence, with and without sound. Finally, we will add video of a monkey playing the sound sequence on the monkey piano and study multisensory interactions along the dorsal stream using fMRI and LMAs. In particular, responses in caudal auditory belt and parabelt will be compared with those in inferior parietal and premotor cortex in simultaneous recordings.

Our studies, using alert monkeys trained in a behavioral task, will contribute to the understanding of unified principles of perception and cognition across sensory systems and their interactions with the motor system. Investigating the auditory dorsal stream in a nonhuman primate will provide valuable information about the evolution of speech and music in humans. Our studies are highly relevant for higher-order disorders of auditory processing and speech, such as dysarthria, apraxia of speech, aphasia, and specific language disorders, all of which involve inadequate coordination between sensory and motor systems. The results will also improve our understanding of disorders of sensorimotor integration, such as ataxia, which may be caused by stroke or neurodegenerative disease, thus leading to better therapies and rehabilitation strategies.
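To make the logic of the behavioral apparatus concrete, the minimal Python sketch below illustrates the lever-to-pitch mapping described above: a fixed sequence of lever presses deterministically yields the same melody on every trial. The number of levers, the frequencies, and the function name are illustrative assumptions, not specifications of the actual device.

    # Minimal sketch of the "monkey piano" concept (illustrative only).
    # Lever count and pitches are assumed values, not the real apparatus settings.
    LEVER_TO_PITCH_HZ = {1: 262, 2: 294, 3: 330, 4: 349}  # e.g., C4, D4, E4, F4

    def melody_from_presses(presses):
        """Map a sequence of lever presses to the pitch sequence (melody) it produces."""
        return [LEVER_TO_PITCH_HZ[lever] for lever in presses]

    # A fixed press sequence produces the same melody on every trial,
    # which is the property the training and fMRI comparisons rely on.
    print(melody_from_presses([1, 3, 2, 4]))  # -> [262, 330, 294, 349]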
When we learn to sing, speak, or play a musical instrument, our motor system has to produce the right sounds under auditory and visual guidance. To understand the neural mechanisms of this sensorimotor integration process, we will train rhesus monkeys in a novel behavioral task that allows them to produce sound sequences, and we will measure the monkeys' brain activity with functional MRI and MRI-guided electrode recordings. Our studies will elucidate the evolution of speech and music and will help to develop a better understanding of higher-level disorders of auditory processing, speech, and language, including dysarthria, apraxia of speech, and various forms of aphasia.