When we speak, listeners hear us and understand us if we speak correctly. But we also hear ourselves, and this auditory feedback affects our ongoing speech: delaying it causes dysfluency; perturbing its pitch or formants induces compensation. Yet we can also speak intelligibly even when we can't hear ourselves. For this reason, most models of speech motor control assume that auditory processing is engaged during speaking only when auditory feedback is available. In this grant, we propose to investigate a computational model of speaking that represents a major departure from this view. Our model proposes that the auditory system always plays a major role in controlling speaking, regardless of whether auditory feedback is available. In our state-feedback control (SFC) model of speech production, we posit two things about the role of the auditory system. First, the auditory system continuously maintains an estimate of current vocal output. This estimate is derived not only from available auditory feedback, but also from multiple other sources of information, including motor efference, other sensory modalities, and phonological and lexical context. Second, this estimate of current vocal output is used both at a low level to monitor and correct ongoing speech motor output and at a higher level to regulate the production of utterance sequences. By comparing computational simulations of our model with functional imaging experiments, we will test key predictions of the model as they apply to a wide range of speech production, from single utterances to utterance sequences.
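To make the estimator-based control loop concrete, the sketch below implements a minimal observer of the kind the SFC framework describes: a running prediction of vocal output driven by motor efference copy, corrected by noisy auditory feedback when feedback is available. This is a toy scalar illustration under assumed linear dynamics; the parameters (A, B, C) and the gains (K_EST, K_CTRL) are placeholders chosen for demonstration, not the grant's actual model.

import numpy as np

# Illustrative scalar "vocal state" (e.g., deviation of pitch from target).
A, B, C = 0.9, 1.0, 1.0      # state dynamics, motor gain, sensory mapping (assumed)
K_EST, K_CTRL = 0.5, 0.4     # estimator and controller gains (assumed)
rng = np.random.default_rng(0)

x = 1.0        # true vocal state, starting off-target
x_pred = 0.0   # the auditory system's running prediction of that state

for t in range(20):
    y = C * x + rng.normal(0.0, 0.05)            # noisy auditory feedback
    # Correct the efference-copy prediction with feedback; with K_EST = 0
    # (feedback unavailable) the estimate still runs on prediction alone,
    # mirroring the claim that the estimate persists without hearing oneself.
    x_hat = x_pred + K_EST * (y - C * x_pred)
    u = -K_CTRL * x_hat                          # motor correction from the estimate
    x_pred = A * x_hat + B * u                   # predict next state via efference copy
    x = A * x + B * u + rng.normal(0.0, 0.02)    # true vocal plant evolves
    print(f"t={t:2d}  true={x:+.3f}  estimate={x_hat:+.3f}")

Running the loop shows the state converging toward target while the internal estimate tracks it; setting K_EST to zero leaves the controller operating purely on prediction, the no-feedback case the model is meant to accommodate.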
The specific aims of this grant are (1) to demonstrate that the auditory system continuously maintains an estimate of current vocal output, and (2) to determine how auditory feedback processing controls the production of utterance sequences. The proposed work not only addresses fundamentally important basic-science questions about speech production, but also has broad clinical impact, since abnormalities in auditory feedback processing are implicated in many speech impairments.
The importance of auditory feedback in speaking is underscored by the many diseases with speech disorders whose etiology has been wholly or partially ascribed to underlying deficits in auditory feedback processing, including autism, stuttering, apraxia of speech, spasmodic dysphonia, cerebellar ataxia, schizophrenia, dementia, and Parkinson's disease. This project will lead to a better understanding of the role of auditory feedback, which may in turn improve the diagnosis and treatment of these speech impairments.