Defining the basic neural mechanisms that support speech and language function is fundamental to gaining clinical insight into communication disorders such as aphasia and dyslexia. A better understanding of the neural basis of speech may also enable the development of neural prosthetic devices for disabling neurological disorders that impair communication (e.g., ALS or stroke). To achieve these goals, two key challenges must be addressed. First, how are speech sounds represented by cortical activity? Humans fluidly understand speech despite large variations in speakers and environmental conditions, but the underlying neural representations that support this invariant recognition ability are fundamentally unknown. Second, what is the anatomical substrate of cortical speech representation? Theoretical models and nonhuman animal data suggest that auditory object recognition may be organized hierarchically, but it remains unknown whether this architecture is present in the human brain. This project will address these two key questions, neural representation and connectivity, by investigating intracranial (electrocorticographic, ECoG) responses to speech measured with microelectrode arrays in neurosurgical patients. During the mentored K99 phase, the first specific aim will investigate the neural representation of speech in higher order auditory cortex using a neural encoding model approach. A neural encoding model describes quantitatively what speech features are encoded by specific brain areas and predicts the neural response to novel speech stimuli. The presence of a categorical phonetic speech representation will be tested by comparing a phonetic model, which captures acoustic invariance among phone-level categories, to a linear spectrotemporal model based on the speech spectrogram.
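To make the encoding model approach concrete, the following is a minimal sketch of a linear spectrotemporal encoding model of the kind described above, fit with ridge regression and evaluated by predicting held-out responses. The simulated spectrogram, the dimensions, and all variable names are illustrative assumptions, not the project's actual analysis code.

```python
# Sketch: linear spectrotemporal (STRF-style) encoding model.
# Data are simulated; dimensions and regularization are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_freq, n_lags = 2000, 16, 5   # time bins, frequency bands, time lags

# Simulated speech spectrogram: time x frequency
spec = rng.standard_normal((n_time, n_freq))

def lagged_features(spec, n_lags):
    """Stack time-lagged copies of the spectrogram into a design matrix."""
    T, F = spec.shape
    X = np.zeros((T, F * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * F:(lag + 1) * F] = spec[:T - lag]
    return X

X = lagged_features(spec, n_lags)

# Ground-truth spectrotemporal filter and noisy simulated neural response
true_strf = rng.standard_normal(X.shape[1])
y = X @ true_strf + 0.5 * rng.standard_normal(n_time)

# Split into fitting and held-out segments
X_fit, y_fit = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

# Ridge regression, closed form: w = (X'X + alpha*I)^-1 X'y
alpha = 1.0
w = np.linalg.solve(X_fit.T @ X_fit + alpha * np.eye(X.shape[1]),
                    X_fit.T @ y_fit)

# Model quality: correlation between predicted and actual held-out response
pred = X_test @ w
r = np.corrcoef(pred, y_test)[0, 1]
print(f"held-out prediction correlation: {r:.2f}")
```

Comparing models (e.g., phonetic vs. spectrotemporal) amounts to swapping the feature matrix `X` while holding the fitting and evaluation procedure fixed, so that differences in held-out prediction accuracy reflect differences in the hypothesized representation.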
During the independent R00 phase, the second specific aim will investigate the connectivity of neural circuits supporting speech perception using electrical cortical stimulation (ECS) and functional connectivity analysis. ECS directly maps the anatomical connectivity of stimulated brain sites, while functional connectivity analysis identifies how these connections are modulated during speech perception. Comparing the connectivity maps to the feature selectivity of the fitted encoding models (Aim 1) will provide a comprehensive view of the functional organization of speech representation in higher order auditory cortex. Understanding the cortical representation and connectivity underlying speech recognition has significant implications for a number of health applications. Accurate encoding models can be used to decode speech from neural activity and form the basis of prosthetic devices for communication. Furthermore, mapping the functional organization of speech will allow more precise determination of critical speech sites during neurosurgical procedures and will provide insights into the key brain areas and circuits underlying communication disorders such as aphasia and developmental dyslexia.
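The functional connectivity analysis can be illustrated with a minimal sketch: pairwise correlation between electrode time series is computed in two conditions and the condition difference is taken as the task-driven modulation. The simulated electrodes and the use of plain Pearson correlation (rather than, e.g., spectral coherence) are simplifying assumptions for illustration.

```python
# Sketch: condition-dependent functional connectivity between electrodes.
# Signals are simulated; a shared drive couples electrodes 0-3 during "speech".
import numpy as np

rng = np.random.default_rng(1)
n_elec, n_time = 8, 1000

# Baseline condition: independent noise at each electrode
baseline = rng.standard_normal((n_elec, n_time))

# Speech condition: a shared driving signal couples electrodes 0-3
shared = rng.standard_normal(n_time)
speech = rng.standard_normal((n_elec, n_time))
speech[:4] += shared

def connectivity(data):
    """Pairwise Pearson correlation between electrode time series."""
    return np.corrcoef(data)

conn_base = connectivity(baseline)
conn_speech = connectivity(speech)

# Task-driven modulation: change in coupling relative to baseline
modulation = conn_speech - conn_base
print(f"coupled pair delta:   {modulation[0, 1]:.2f}")   # within driven group
print(f"uncoupled pair delta: {modulation[4, 5]:.2f}")   # outside it
```

In practice, the same contrast would be computed between speech perception and rest epochs of the ECoG recordings, and the resulting modulation maps would then be compared against the stimulation-derived anatomical connectivity.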
Human communication relies on spoken language, a critical ability that is disrupted in millions of patients with neuromuscular disorders, traumatic brain injury, stroke, or communication disorders. This project develops computational predictive models of how the human brain encodes speech, using direct brain recordings from neurosurgical patients to validate these models. The outcome of this project is immediately applicable to understanding communication disorders and to the development of neural prosthetic systems that aim to dramatically improve life for large patient populations with disordered speech function.