Brain-computer interfaces (BCIs) are increasingly used in both scientific research and therapeutic interventions for severely disabled patients. Typical devices provide continuous control over computer cursors and/or selection of discrete items using brain activity recorded from the scalp (electroencephalography; EEG), from the brain surface (electrocorticography), or from within the brain (intracortical microelectrode recording). As therapeutic devices, BCIs have been very successful in restoring communication to paralyzed and mute patients, most often through discrete-choice letter spelling. In one study, a BCI provided real-time control of a speech synthesizer for communication; however, that study relied on invasive neural recordings, an approach that is impractical for widespread therapeutic application to speech. The proposed project addresses this gap by investigating the performance of a BCI designed to map non-invasive EEG sensorimotor rhythms onto a low-dimensional speech representation, specifically formant frequencies, for control of a continuous vowel synthesizer with instantaneous auditory and visual feedback. A prototype device has been developed and used by a pilot subject to control a two-dimensional formant-frequency vowel synthesizer. This device will be used in a human subjects study evaluating BCI performance in vowel production tasks to address two major research topics: (1) the effect of feedback modality (auditory vs. visual) on learning BCI control and (2) long-term retention of performance with the BCI device. The goal of this research study is to demonstrate the feasibility of meaningful auditory feedback (i.e., vowel sounds) as an appropriate feedback mechanism for a speech BCI. This technology will ultimately benefit patients with severe communication impairment, especially those for whom invasive BCIs are not viable options.
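The core signal path described above, mapping a two-dimensional control signal onto first and second formant frequencies (F1, F2) and synthesizing a vowel from them, can be sketched with a simple source-filter model. The formant ranges, bandwidths, fundamental frequency, and impulse-train glottal source below are illustrative assumptions for the sketch, not the project's actual synthesizer design:

```python
import math

# Hypothetical mapping from a 2-D BCI control signal (u, v), each in [0, 1],
# to first/second formant frequencies. The ranges are rough approximations of
# adult vowel formant spans, not the study's actual parameter values.
def control_to_formants(u, v):
    f1 = 250.0 + u * (850.0 - 250.0)    # F1: ~250-850 Hz
    f2 = 850.0 + v * (2500.0 - 850.0)   # F2: ~850-2500 Hz
    return f1, f2

def formant_resonator(x, freq_hz, bw_hz, fs):
    """Second-order all-pole resonator (a standard formant filter design)."""
    r = math.exp(-math.pi * bw_hz / fs)
    b = 2.0 * r * math.cos(2.0 * math.pi * freq_hz / fs)
    c = -r * r
    a = 1.0 - b - c                     # normalizes gain to 1 at DC
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = a * s + b * y1 + c * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def synthesize_vowel(f1_hz, f2_hz, dur_s=0.5, fs=16000, f0_hz=120.0):
    """Source-filter sketch: an impulse-train glottal source shaped by two
    formant resonators, then peak-normalized to [-1, 1]."""
    n = int(dur_s * fs)
    period = int(fs / f0_hz)
    src = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    out = formant_resonator(src, f1_hz, 80.0, fs)   # assumed F1 bandwidth
    out = formant_resonator(out, f2_hz, 100.0, fs)  # assumed F2 bandwidth
    peak = max(abs(v) for v in out) or 1.0
    return [v / peak for v in out]
```

In the actual device the control signal would be decoded continuously from EEG sensorimotor rhythms and the audio played back with minimal latency; the mapping and synthesis are shown offline here only to make the two-dimensional formant control scheme concrete.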
The results of this research will also provide a basis for future BCI designs that use low-dimensional speech synthesizers for continuous production of both consonants and vowels from non-invasive EEG. Achieving the long-term research goal of real-time, continuous synthesis of artificial speech (both vowels and consonants) for restoration of communication would be a significant advance in our ability to improve patients' quality of life and allow them to engage more fully in social interactions.
The proposed research will improve our understanding of the feedback control mechanisms required for optimal communication using a brain-computer interface. The outcome of this research has the potential to improve the quality of life of patients who are mute, paralyzed, or otherwise unable to speak, and will help inform the development of improved brain-computer interface applications for communication.
Brumberg, Jonathan S; Krusienski, Dean J; Chakrabarti, Shreya et al. (2016) Spatio-Temporal Progression of Cortical Activity Related to Continuous Overt and Covert Speech Production in a Reading Task. PLoS One 11:e0166872

Lotte, Fabien; Brumberg, Jonathan S; Brunner, Peter et al. (2015) Electrocorticographic representations of segmental features in continuous speech. Front Hum Neurosci 9:97

Stephen, Emily P; Lepage, Kyle Q; Eden, Uri T et al. (2014) Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses. Front Comput Neurosci 8:31

Brumberg, Jonathan S; Lorenz, Sean D; Galbraith, Byron V et al. (2012) The Unlock Project: a Python-based framework for practical brain-computer interface communication "app" development. Conf Proc IEEE Eng Med Biol Soc 2012:2505-8