The basic mechanisms underlying comprehension of spoken language are unknown. We do not understand, for example, how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. Investigating the cortical representation of speech sounds is likely to shed light on this fundamental question. Previous research has implicated the superior temporal cortex in the processing of speech sounds, but how the cortex actually represents (i.e., encodes) phonemes remains undetermined. Recording neural activity directly from the cortical surface is a promising approach because it provides both high spatial and high temporal resolution. Here, I propose to examine the mechanisms of phonetic encoding using neurophysiological recordings obtained during neurosurgical procedures. High-density electrode arrays, advanced signal processing, and direct electrocortical stimulation will be used to characterize both the local population encoding of speech sounds in the lateral temporal cortex and the global processing that spans multiple sensory and cognitive areas.
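As one concrete illustration of the signal-processing step referenced above, the sketch below shows how a high-gamma amplitude envelope, a widely used measure of local cortical population activity in electrocorticography (ECoG) studies, might be extracted from a single electrode's recording. This is a generic, assumed analysis for illustration only; the sampling rate, frequency band, filter order, and function names are hypothetical rather than drawn from the proposal.

```python
# A minimal sketch of one standard ECoG analysis step, not the author's
# actual pipeline: estimating the high-gamma (~70-150 Hz) amplitude
# envelope, a common proxy for local neural population activity.
# Sampling rate, band edges, and filter order are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def high_gamma_envelope(trace, fs, band=(70.0, 150.0), order=4):
    """Return the high-gamma amplitude envelope of a 1-D voltage trace."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trace)      # zero-phase band-pass filter
    return np.abs(hilbert(filtered))        # Hilbert analytic amplitude

# Usage on synthetic data standing in for one electrode's recording:
fs = 3000.0                                 # assumed sampling rate (Hz)
raw = np.random.randn(int(fs))              # one second of surrogate ECoG
envelope = high_gamma_envelope(raw, fs)
```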
The aim of this research is to reveal the fundamental mechanisms that underlie comprehension of spoken language. Understanding how speech is encoded in the brain has significant implications for the development of new diagnostic and rehabilitative strategies for language disorders such as aphasia, dyslexia, and autism. Abnormal perception of phonemes is a central component of the language disability in each of these conditions.