Spoken language is central to human communication, yet this ability is disrupted in millions of people with neuromuscular disorders, traumatic brain injury, stroke, or other communication disorders. This project develops computational models that predict how the human brain encodes speech, and validates these models using direct brain recordings from neurosurgical patients. The outcomes of this project are immediately applicable to understanding communication disorders and to the development of neural prosthetic systems that aim to dramatically improve quality of life for the large patient populations with disordered speech.
Defining the basic neural mechanisms that support speech comprehension is a fundamental scientific challenge with direct relevance to clinical insight into communication disorders such as aphasia and dyslexia. A better understanding of the neural basis of speech may also enable the development of neural prosthetic devices for disabling neurological disorders that affect communication (e.g., ALS or stroke). Humans understand speech fluidly despite large variation in speaker identity, speaking rate, and background noise, but the underlying neural representations that support this invariant speech recognition ability are unknown. We will establish and test computational models of the invariant neural representations the human auditory system uses to achieve reliable speech comprehension. An important application of this knowledge is the development of robust neural decoding algorithms for neural prosthetic systems designed to restore conversational speech to individuals with disabling language disorders.

To achieve these goals, this project will investigate intracranial responses to speech measured with high-density electrode arrays implanted in the auditory cortex of patients undergoing neurosurgical procedures. The first aim investigates the neural encoding of invariant speech representations believed to comprise key organizational units of spoken language, including phonetic and syllabic structure. The second aim develops a novel machine learning framework that builds decoding models directly from neural data recorded during intended silent (imagined) speech. This research program will provide new quantitative tools for understanding the neural mechanisms of speech comprehension and imagery in the human brain, with the goal of advancing these findings toward neural interfaces that restore natural speech.
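To make the notion of a neural decoding model concrete, the sketch below shows the general idea in miniature: a linear (ridge regression) decoder that maps cortical activity features to speech spectrogram features, a common baseline in the stimulus-reconstruction literature. This is an illustrative assumption on our part, not the project's actual framework; the data are simulated, and all dimensions and parameter values (electrode count, spectrogram bins, regularization strength) are hypothetical.

# Minimal illustrative sketch of a neural decoding model (hypothetical;
# simulated data, not the project's framework).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_spec_bins = 2000, 64, 32  # assumed sizes

# Simulated neural features: time points x electrodes.
X = rng.standard_normal((n_samples, n_electrodes))

# Simulated speech spectrogram features generated by a linear map plus noise.
W_true = rng.standard_normal((n_electrodes, n_spec_bins))
Y = X @ W_true + 0.5 * rng.standard_normal((n_samples, n_spec_bins))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0)

# Ridge regression decoder: predicts spectrogram features from neural features.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, Y_train)
Y_pred = decoder.predict(X_test)

# Decoding accuracy: correlation between predicted and actual features per bin.
r = [np.corrcoef(Y_test[:, k], Y_pred[:, k])[0, 1] for k in range(n_spec_bins)]
print(f"mean reconstruction correlation: {np.mean(r):.2f}")

In practice, decoders of this family are trained on neural responses recorded during overt speech; the challenge the second aim addresses is learning such mappings directly from imagined speech, where no acoustic ground truth is available.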
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.