The mapping from sound to meaning that is fundamental to the comprehension of spoken language involves a series of neural computations comprising two basic cognitive subroutines: perceptual processing, which involves the low-level auditory and phonetic analysis of the speech signal, and lexical access, whereby the semantic and other properties associated with a word's sound form are retrieved. While past work has investigated the neuroanatomical bases of perceptual processing and lexical access, methodological limitations have prevented researchers from probing the neurophysiological dynamics that implement these computations at a fine-grained spatial and temporal scale. Recent advances in the acquisition and analysis of high-density intracranial electrocorticography (ECoG) data recorded directly from the brains of awake, behaving humans undergoing brain surgery have elucidated the cortical representations of speech sounds and the dynamics of perceptual processing, but it is not yet known how the brain implements lexical access. In the proposed work, we will investigate the neurophysiology of lexical access and the interactions between sound and meaning by recording ECoG responses to spoken words while patients with implanted electrode grids participate in two psycholinguistic experiments (Aim 1), and by developing a computational model that links internal model predictions to spatiotemporally distributed processing components in neural responses to speech (Aim 2).
Aims 1.1 and 1.2 will examine the influence of different contextual biases on the neurophysiological response to a spoken word.
In Aim 1.1, we hypothesize that both acoustic and contextual cues will modulate lexical access, and that the neural signature of lexical access will emerge in the superior temporal gyrus (STG). Importantly, any consistent differences between ECoG responses to an acoustically identical stimulus presented in different contexts must reflect neural processing components associated with lexical access rather than perceptual processing.
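To make this logic concrete, the Aim 1.1 contrast can be sketched as a mass-univariate comparison of high-gamma responses to the same token across contexts. Everything below (array shapes, the independent-samples t-test, and the Benjamini-Hochberg correction) is an illustrative assumption, not the proposal's specified analysis pipeline.

```python
# Hypothetical sketch of the Aim 1.1 contrast: trial-level high-gamma
# ECoG responses to the SAME acoustic token heard in two different
# contexts. Shapes, the t-test, and the FDR step are illustrative
# assumptions, not the proposal's specified pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated high-gamma responses: (trials, electrodes, timepoints),
# time-locked to the onset of the acoustically identical target word.
n_trials, n_elec, n_time = 40, 64, 200
context_a = rng.standard_normal((n_trials, n_elec, n_time))
context_b = rng.standard_normal((n_trials, n_elec, n_time))

# Mass-univariate test at each (electrode, timepoint). Because the
# stimulus is physically identical across contexts, reliable
# differences cannot be attributed to perceptual processing alone.
t_vals, p_vals = stats.ttest_ind(context_a, context_b, axis=0)

# Benjamini-Hochberg FDR correction across all electrode-time tests.
flat = p_vals.ravel()
m = flat.size
order = np.argsort(flat)
raw = flat[order] * m / np.arange(1, m + 1)
q = np.empty(m)
q[order] = np.clip(np.minimum.accumulate(raw[::-1])[::-1], 0, 1)
sig = q.reshape(p_vals.shape) < 0.05

print(f"{sig.sum()} electrode-time samples show a context effect")
```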
In Aim 1.2, we hypothesize that top-down contextual biases will modulate perceptual processing of a subsequent related spoken word, consistent with highly interactive perceptual and lexical processing dynamics in STG.

Finally, in Aim 2, we will develop a computational model of speech perception capable of predicting time-varying neural response patterns. The specification of such a model will formalize a theoretical framework within which to consider hypotheses about the neural implementations of perceptual processing and lexical access. It will also generate novel, testable predictions to guide future ECoG experiments on speech perception. Understanding the neural circuitry and computations that support perceptual processing and lexical access has the potential to improve the diagnosis and characterization of speech and language disorders (e.g., dyslexia, aphasia), lead to innovative, brain-based treatments for such conditions, and aid in the development of assistive technologies capable of extracting meaning from speech.
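As one concrete, hypothetical instantiation of the Aim 2 linking framework, time-varying predictions from an internal model of speech perception could be related to electrode-level high-gamma activity through a linearized, time-lagged encoding model. The ridge-regression form, lag window, and feature set below are assumptions for illustration; the proposal does not commit to this particular model.

```python
# Minimal sketch of a linearized encoding model linking an internal
# model's time-varying predictions to ECoG high gamma. All names,
# dimensions, and the ridge/time-lag setup are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

n_time, n_feat, n_elec, n_lags = 5000, 12, 64, 30  # 30 lags ~ 300 ms at 100 Hz
features = rng.standard_normal((n_time, n_feat))   # model-derived predictors
hg = rng.standard_normal((n_time, n_elec))         # high-gamma envelope per electrode

# Build a time-lagged design matrix so each electrode's response can
# depend on the recent history of the model's predictions (a TRF-style
# spatiotemporal receptive field).
X = np.concatenate(
    [np.roll(features, lag, axis=0) for lag in range(n_lags)], axis=1
)
X[:n_lags] = 0  # zero out samples contaminated by np.roll wrap-around

# Split, fit, and evaluate held-out prediction of the neural response.
split = int(0.8 * n_time)
model = Ridge(alpha=1.0).fit(X[:split], hg[:split])
pred = model.predict(X[split:])

# Per-electrode correlation between predicted and observed high gamma.
r = [np.corrcoef(pred[:, e], hg[split:, e])[0, 1] for e in range(n_elec)]
print(f"median held-out r across electrodes: {np.median(r):.3f}")
```

The time-lagged design matrix is what would let a single linear readout capture spatiotemporally distributed processing components: each electrode receives its own weighting over features and latencies, analogous to a temporal receptive field.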
The proposed research seeks to characterize how the brain extracts meaning from speech and how the processing of meaning interacts with ongoing speech perception. Understanding these processes could lead to brain-based treatments for clinical disorders that disrupt speech perception and language comprehension, such as dyslexia, specific language impairment, and aphasia.