Brain-computer interfaces (BCIs) are increasingly used in both scientific research and therapeutic interventions for severely disabled patients. Typical devices provide continuous control of computer cursors and/or selection of discrete items using brain activity recorded from the scalp (electroencephalography, EEG), the brain surface (electrocorticography), or inside the brain (intracortical microelectrode recording). As therapeutic devices, BCIs have been very successful in restoring communication to paralyzed and mute patients, most often through discrete-choice letter spelling. One prior study employed real-time BCI control of a speech synthesizer for communication; however, it relied on invasive neural recordings, which are impractical for widespread speech-therapeutic application.

The proposed project addresses this gap by investigating the performance of a BCI that maps non-invasive EEG sensorimotor rhythms onto a low-dimensional speech representation, specifically formant frequencies, to control a continuous vowel synthesizer with instantaneous auditory and visual feedback. A prototype device has been developed and used by a pilot subject to control a two-dimensional formant-frequency vowel synthesizer. This device will be used in a human subjects study evaluating BCI performance on vowel production tasks to address two major research topics: [1] the effect of feedback modality (auditory vs. visual) on learning BCI control and [2] long-term retention of performance with the BCI device.

The goal of this research study is to demonstrate the feasibility of meaningful auditory feedback (i.e., vowel sounds) as an appropriate feedback mechanism for a speech BCI. This technology will ultimately benefit patients with severe communication impairment, especially those for whom invasive BCIs are not viable options. The results will also provide a basis for future BCI designs that use low-dimensional speech synthesizers for continuous production of both consonants and vowels from non-invasive EEG. The long-term research goal, continuous real-time synthesis of artificial speech (both vowels and consonants) for restoration of communication, would be a significant advance in our ability to improve patients' quality of life and allow them to engage more fully in social interactions.
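The abstract does not specify the decoding pipeline, but the architecture it describes (two sensorimotor-rhythm control signals driving an (F1, F2) formant plane that is rendered as a vowel) can be sketched in a few lines. The following Python is a minimal illustration, not the project's implementation: the mu-band range (8-13 Hz), the left/right electrode pairing, the formant ranges, the sampling rates, and all function names are illustrative assumptions.

import numpy as np
from scipy.signal import butter, sosfilt, lfilter

FS_EEG = 256        # assumed EEG sampling rate (Hz)
FS_AUDIO = 16000    # assumed synthesizer sampling rate (Hz)

def mu_band_power(eeg, fs=FS_EEG, lo=8.0, hi=13.0):
    """Mean power of the 8-13 Hz sensorimotor (mu) rhythm in one channel."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, eeg)
    return float(np.mean(filtered ** 2))

def power_to_formants(p_left, p_right,
                      f1_range=(300.0, 800.0), f2_range=(900.0, 2300.0)):
    """Linearly map two normalized band powers (0..1) into the (F1, F2) plane."""
    f1 = f1_range[0] + np.clip(p_left, 0.0, 1.0) * (f1_range[1] - f1_range[0])
    f2 = f2_range[0] + np.clip(p_right, 0.0, 1.0) * (f2_range[1] - f2_range[0])
    return f1, f2

def synthesize_vowel(f1, f2, dur=0.1, f0=120.0, fs=FS_AUDIO, bw=80.0):
    """Source-filter vowel: impulse train at f0 through two cascaded resonators."""
    n = int(dur * fs)
    source = np.zeros(n)
    source[::int(fs / f0)] = 1.0              # glottal impulse train
    out = source
    for fc in (f1, f2):                       # one 2nd-order resonator per formant
        r = np.exp(-np.pi * bw / fs)
        theta = 2.0 * np.pi * fc / fs
        out = lfilter([1.0 - r], [1.0, -2.0 * r * np.cos(theta), r ** 2], out)
    return out / (np.max(np.abs(out)) + 1e-9)

# Example of one ~100 ms control cycle, here with simulated EEG frames:
left = np.random.randn(FS_EEG // 10)
right = np.random.randn(FS_EEG // 10)
f1, f2 = power_to_formants(mu_band_power(left), mu_band_power(right))
audio = synthesize_vowel(f1, f2)              # play for auditory feedback;
                                              # plot (f1, f2) for visual feedback

Running this loop continuously, with the audio played back and the (F1, F2) point displayed, would provide the instantaneous auditory and visual feedback the abstract contrasts.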

Public Health Relevance

The proposed research will improve our understanding of the feedback control mechanisms required for optimal communication using a brain-computer interface. The outcome has the potential to improve the quality of life for paralyzed, mute patients and others lacking the ability to speak, and to help inform the development of improved brain-computer interface applications for communication.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Small Research Grants (R03)
Project #
7R03DC011304-02
Application #
8336863
Study Section
Special Emphasis Panel (ZDC1-SRB-Y (56))
Program Officer
Miller, Roger
Project Start
2011-09-21
Project End
2014-08-31
Budget Start
2012-09-01
Budget End
2013-08-31
Support Year
2
Fiscal Year
2012
Total Cost
$141,239
Indirect Cost
$41,239
Name
University of Kansas Lawrence
Department
Other Health Professions
Type
Schools of Arts and Sciences
DUNS #
076248616
City
Lawrence
State
KS
Country
United States
Zip Code
66045
Brumberg, Jonathan S; Pitt, Kevin M; Mantie-Kozlowski, Alana et al. (2018) Brain-Computer Interfaces for Augmentative and Alternative Communication: A Tutorial. Am J Speech Lang Pathol 27:1-12
Brumberg, Jonathan S; Pitt, Kevin M; Burnison, Jeremy D (2018) A Noninvasive Brain-Computer Interface for Real-Time Speech Synthesis: The Importance of Multimodal Feedback. IEEE Trans Neural Syst Rehabil Eng 26:874-881
Pitt, Kevin M; Brumberg, Jonathan S (2018) Guidelines for Feature Matching Assessment of Brain-Computer Interfaces for Augmentative and Alternative Communication. Am J Speech Lang Pathol 27:950-964
Brumberg, Jonathan S; Nguyen, Anh; Pitt, Kevin M et al. (2018) Examining sensory ability, feature matching and assessment-based adaptation for a brain-computer interface using the steady-state visually evoked potential. Disabil Rehabil Assist Technol:1-9
Brumberg, Jonathan S; Krusienski, Dean J; Chakrabarti, Shreya et al. (2016) Spatio-Temporal Progression of Cortical Activity Related to Continuous Overt and Covert Speech Production in a Reading Task. PLoS One 11:e0166872
Lotte, Fabien; Brumberg, Jonathan S; Brunner, Peter et al. (2015) Electrocorticographic representations of segmental features in continuous speech. Front Hum Neurosci 9:97
Stephen, Emily P; Lepage, Kyle Q; Eden, Uri T et al. (2014) Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses. Front Comput Neurosci 8:31
Brumberg, Jonathan S; Lorenz, Sean D; Galbraith, Byron V et al. (2012) The Unlock Project: a Python-based framework for practical brain-computer interface communication "app" development. Conf Proc IEEE Eng Med Biol Soc 2012:2505-8