Understanding the physical, computational, and theoretical bases of human vocal communication, speech, is crucial to understanding voice, speech, and language diseases and disorders, and to improving their diagnosis, treatment, and prevention. Meeting this challenge requires knowledge of the neural and sensorimotor mechanisms of vocal motor control. Our project will directly investigate the neural and sensorimotor mechanisms involved in the production of complex, natural vocal communication signals. Our results will enhance brain-computer interface technology for communication and will accelerate the development of prostheses and other assistive technologies for individuals with communication deficits due to injury or disease. We will develop a vocal prosthetic that directly translates neural signals from cortical sensorimotor and vocal-motor control regions into vocal output in real time. Building on the success of brain-computer interfaces for general motor control in non-human primates, we will develop the prosthetic in songbirds, whose acoustically rich, learned vocalizations share many features with human speech. Because the songbird vocal apparatus is functionally and anatomically similar to the human larynx, and the cortical regions that control it are closely analogous to speech motor-control areas of the human brain, songbirds offer an ideal model for the proposed studies. Beyond the application of our work to human voice and speech, development of the vocal prosthetic will enable novel speech-relevant studies in the songbird model that can reveal fundamental mechanisms of vocal learning and production. In the first stage of the project, we will collect a large data set of simultaneously recorded neural activity and vocalizations. In stage two, we will apply machine learning and artificial intelligence techniques to develop algorithms that map neural recordings to vocal output and enable us to estimate intended vocalizations directly from neural data. In stage three, we will develop computing infrastructure to run these algorithms in real time, predicting intended vocalizations from neural activity as the animal is actively producing those vocalizations. In stage four, we will test the effectiveness of the prosthetic by substituting the output of our prosthetic system for the bird's own vocalization. Success will set the stage for testing these technologies in humans and translating them into multiple assistive devices. In addition to our research goals, the project will engage graduate, undergraduate, and high school students through the development of novel educational modules that introduce students to brain-computer interfaces and to multidisciplinary studies that span engineering and the basic sciences.
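To make the stage-two decoding step concrete, the sketch below illustrates one of the simplest possible mappings from recorded neural activity to acoustic output: a linear (ridge) regression from a short causal window of binned spike counts to spectrogram frames of the produced song. This is only an illustrative example, not the project's actual decoder; the array shapes, the synthetic data, and the choice of ridge regression are hypothetical placeholders used to show the structure of the problem (neural features in, acoustic features out).

```python
# Illustrative sketch (assumed, not the project's pipeline): decode acoustic
# output from neural activity with a simple linear model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: binned spike counts from 64 channels, time-aligned to
# 128-bin spectrogram frames of the bird's song.
n_frames, n_channels, n_freq_bins = 5000, 64, 128
neural = rng.poisson(2.0, size=(n_frames, n_channels)).astype(float)
spectrogram = rng.random((n_frames, n_freq_bins))

# Stack a short history of neural activity so each acoustic frame is predicted
# from the preceding few bins (a simple causal decoding window).
history = 5
X = np.stack([np.roll(neural, shift=k, axis=0) for k in range(history)], axis=1)
X = X[history:].reshape(n_frames - history, -1)   # (frames, history * channels)
Y = spectrogram[history:]

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, shuffle=False)

# Fit the linear decoder and report held-out variance explained.
decoder = Ridge(alpha=1.0).fit(X_train, Y_train)
print("held-out R^2:", decoder.score(X_test, Y_test))
```

In practice, the machine learning and artificial intelligence techniques referenced above would likely replace this linear map with richer models, but the input-output structure of the decoding problem remains the same.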
Developing a vocal prosthesis will directly enhance brain-computer interface technology for communication and accelerate the realization of prostheses and other assistive technologies for individuals with communication deficits due to injury or disease. The basic knowledge acquired about the neural and sensorimotor mechanisms of vocal motor control will improve understanding of multiple voice, speech, and language diseases and disorders. The techniques developed will enable novel future studies of vocal production and development.