There is a fundamental gap in our understanding of the neural mechanisms of facial movements, both expressive and voluntary. This conceptual gap constitutes an important problem because, until it is filled, we cannot explain how emotions are expressed or how human speech is controlled, and the processes underlying the many disorders of social communication will remain elusive. The long-term goal is to understand how cortical and subcortical facial motor systems integrate sensory, emotional, and cognitive inputs and transform them into coherent motor acts. The overall objective of this proposal is to determine the functional organization of, and the fundamental mechanisms by which, a set of distributed cortical areas controls facial muscles to generate coherent facial movements. The experimental model system is a set of cortical areas with direct projections to the facial nucleus. This system allows testing of the central hypothesis that both emotional and voluntary facial movements are controlled through the coordinated activity of a network of cortical areas, each with a unique functional specialization. The rationale for this proposal is that completing the research will uncover, for the first time, the neural mechanisms of facial movement control, imposing critical constraints on general motor control theory and on the mechanisms and origins of speech. The central hypothesis will be tested through three specific aims:
Aim 1 will determine the functional organization of cortex for facial movement control. The working hypothesis that emotional and voluntary movements alike are coded by both medial and lateral cortical face-motor areas will be tested using functional magnetic resonance imaging (fMRI) and electrophysiological recordings from fMRI-identified face-motor areas.
Aim 2 will determine the functional network structure of cortical face movement areas. The working hypothesis that cortical face-motor areas operate as a network will be tested through functional interaction analyses and joint electrical stimulation and electrophysiology.
Aim 3 will determine the principles of descending control of the facial musculature. The working hypothesis that facial movements are coded as sequences of neural states, translated by muscle synergies into facial signals, will be tested through joint electrical stimulation, electrophysiology, and electromyography.

The approach is innovative because it brings a new model system to motor neuroscience; it applies a new paradigm and a multimodal experimental approach to the study of the neural mechanisms of social communication, from single cells to large-scale networks; and it challenges long-held views on the neural substrates of facial movement. The proposed research is significant because it will add a new dimension to motor control theory, define how emotions are translated into expressions and how social signals are generated, and provide a new model system and foundational knowledge about the neural mechanisms of speech, thereby imposing important constraints on our understanding of the human condition. The results will improve the understanding, and potentially the treatment, of neurological and psychiatric syndromes affecting motor function and social communication.
The proposed research is relevant to public health because the neural mechanisms that generate facial movements are essential to human emotional and social life and are altered in many psychiatric disorders, including bipolar disorder, schizophrenia, and autism spectrum disorder, as well as in neurological syndromes such as volitional facial paralysis and amimia. The studies proposed in this application will examine the neural mechanisms by which the brain controls facial movements. Thus, the proposed research is relevant to the part of the NIH's mission that pertains to reducing illness and disability.