The purpose of the proposed research is to investigate the organization of speech sound representation across the auditory cortex and the spatiotemporal interactions between cortical areas during speech perception. The brain's ability to parse meaning from incoming acoustic information arises from complex interactions within cortical networks, but how these networks are organized remains unclear. High-density intracranial electrocorticography (ECoG) will be used to record neural activity from speech-selective areas in the auditory cortex of awake, behaving human subjects. These subjects are patients with medication-refractory epilepsy who have electrodes implanted based on clinical criteria. By combining multiple computational approaches, this work will elucidate (1) the acoustic feature selectivity of single electrodes on the ECoG array, (2) the spatial organization of this feature selectivity, and (3) the spatiotemporal dynamics of neural population responses to speech sounds, with the goal of uncovering emergent properties at the population level that cannot be recovered from single sites alone. Participants will listen passively to sentences spoken by a variety of speakers of different genders while the ECoG signal is recorded. Additional sound stimuli will include environmental sounds and modulation-limited noise ripples. Spectrotemporal receptive field models will be built from the neural responses to these sentences using maximally informative dimensions (MID) analysis and different parameterizations of the sound stimuli, including decomposition of the sounds into their corresponding spectrotemporal modulations and identification of the presence or absence of linguistic parameters such as manner of articulation. This will uncover the selectivity of each site for low-level and high-level sound features, as well as the organization of this selectivity across the anatomy of the cortical surface.
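The receptive-field modeling step described above can be sketched in miniature. The sketch below fits a spectrotemporal receptive field (STRF) by ridge regression on simulated data; this is a simpler stand-in for the MID analysis named in the proposal (MID involves information-theoretic optimization and is considerably more involved), and all array shapes, parameter values, and the simulated stimulus are illustrative assumptions, not details from the study.

```python
import numpy as np

def build_design_matrix(spectrogram, n_delays):
    """Stack time-lagged copies of the stimulus spectrogram.

    spectrogram: (n_times, n_freqs) array of stimulus energy.
    n_delays: number of past time bins the receptive field spans.
    Returns an (n_times, n_delays * n_freqs) predictor matrix.
    """
    n_times, n_freqs = spectrogram.shape
    X = np.zeros((n_times, n_delays * n_freqs))
    for d in range(n_delays):
        # Column block d holds the stimulus delayed by d time bins.
        X[d:, d * n_freqs:(d + 1) * n_freqs] = spectrogram[:n_times - d]
    return X

def fit_strf(spectrogram, response, n_delays=10, ridge=1.0):
    """Ridge-regression STRF: w = (X'X + lambda*I)^-1 X'y."""
    X = build_design_matrix(spectrogram, n_delays)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    w = np.linalg.solve(XtX, X.T @ response)
    return w.reshape(n_delays, -1)  # rows = delays, cols = frequencies

# Simulated site tuned to one frequency band at a short delay.
rng = np.random.default_rng(0)
stim = rng.standard_normal((2000, 16))              # spectrogram-like stimulus
true_strf = np.zeros((10, 16))
true_strf[2, 5] = 1.0                               # band 5, delay of 2 bins
resp = (build_design_matrix(stim, 10) @ true_strf.ravel()
        + 0.1 * rng.standard_normal(2000))          # noisy neural response
strf = fit_strf(stim, resp, n_delays=10, ridge=1.0)
```

In practice the recovered `strf` matrix peaks at the simulated delay and frequency band, which is the kind of feature-selectivity map that would be compared across electrodes and stimulus parameterizations.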
Comparing this mapping for different types of sound stimuli will reveal whether organization differs for speech and non-speech sounds, or whether organization is based purely on acoustic properties of the sound signal. Vector autoregressive models and phase-coupled oscillator models will be used to describe population-level interactions within areas of the auditory cortex. These interactions may manifest as speech-dependent cell assemblies, distributed across space, that segregate processing of different phonetic features and cannot be detected at the single-electrode level. Results from these experiments have implications for understanding neural coding as a whole, and will reveal the relative importance of population activity versus single-electrode computations in speech processing. Elucidating how the brain represents low-level and high-level features important for speech is critical for understanding, diagnosing, and treating communication disorders including delayed language learning, aphasias, dyslexia, and autism. In addition, this work may help to improve brain-machine interface design, including cochlear implants.
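The vector autoregressive (VAR) modeling of inter-electrode interactions can also be sketched minimally. The example below fits a first-order VAR by ordinary least squares to simulated two-channel data with a known directed coupling; it illustrates only the VAR component (not the phase-coupled oscillator models also named above), and the coupling matrix, noise level, and channel count are hypothetical choices for the demonstration.

```python
import numpy as np

def fit_var(data, order=1):
    """Least-squares fit of a VAR(p) model.

    data: (n_times, n_channels) array of neural signals.
    Returns coefficient matrices [A_1, ..., A_p] such that
    x_t ~= A_1 @ x_{t-1} + ... + A_p @ x_{t-p}.
    """
    n_times, n_ch = data.shape
    Y = data[order:]                                   # targets x_t
    # Predictors: channels at lags 1..p, stacked side by side.
    X = np.hstack([data[order - k:n_times - k] for k in range(1, order + 1)])
    coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return [coefs[k * n_ch:(k + 1) * n_ch].T for k in range(order)]

# Simulated system: channel 1 drives channel 0, but not vice versa.
rng = np.random.default_rng(1)
A = np.array([[0.5, 0.4],
              [0.0, 0.5]])
x = np.zeros((5000, 2))
for t in range(1, 5000):
    x[t] = A @ x[t - 1] + 0.1 * rng.standard_normal(2)
A_hat, = fit_var(x, order=1)
```

The fitted `A_hat` recovers the asymmetric coupling (a large 0→1 influence term, a near-zero 1→0 term), which is the kind of directed, population-level interaction structure the proposed analyses aim to map across the cortical surface.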
Human speech perception involves complex neural circuits that interact across space and time to convert acoustic information into syllables, words, and sentences. The proposed study will investigate how speech sounds are represented in the brain and how circuits in the brain interact during speech perception. Understanding these processes could lead to brain-based treatments of language disorders including delayed language learning, autism, aphasias, and dyslexia, and could help in the development of better brain-machine interfaces, including cochlear implants.