Brain-computer interfaces (BCIs) are increasingly being used in both scientific research and therapeutic intervention applications for severely disabled patients. Typical usage of such devices provides continuous control over computer cursors and/or selection of discrete items using brain activity recorded from the scalp (electroencephalography), on the brain surface (electrocorticography), or inside the brain (intracortical microelectrode recording). As therapeutic devices, BCIs have been very successful in restoring communication abilities to paralyzed and mute patients, most often by discrete-choice letter spelling. In one study, real-time control of a speech synthesizer was employed for communication by BCI. However, that study relied on invasive neurological recordings, which are impractical for widespread speech-therapeutic application. The proposed project addresses this knowledge gap by investigating the performance of a BCI designed to map non-invasive EEG sensorimotor rhythms into a low-dimensional speech representation, specifically formant frequencies, for control of a continuous vowel synthesizer with instantaneous auditory and visual feedback. A prototype device has been developed and used by a pilot subject to control a two-dimensional formant-frequency vowel synthesizer. This device will be used in a human subjects study to evaluate BCI performance in vowel production tasks and to address two major research topics: [1] the effect of feedback modality (auditory vs. visual) on learning BCI control and [2] the long-term retention of performance using the BCI device. The goal of this research study is to demonstrate the feasibility of meaningful auditory feedback (i.e., vowel sounds) as an appropriate feedback mechanism for a speech BCI. This technology will ultimately benefit patients with severe communication impairment, especially those for whom invasive BCIs are not viable options.
The results of this research will also provide a basis for future BCI designs using low-dimensional speech synthesizers for continuous production of both consonants and vowels using non-invasive EEG. The long-term research goal of providing continuous, real-time synthesis of artificial speech (both vowels and consonants) for restoration of communication would be a significant advance in our ability to improve patients' quality of life and allow them to better engage in social interactions.
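The control scheme described above can be illustrated with a minimal source-filter sketch: the BCI decodes two continuous values (the first two formant frequencies, F1 and F2), and a simple synthesizer turns them into a vowel-like sound by passing a glottal impulse train through two resonant filters. This is a hypothetical illustration under assumed parameters (sample rate, bandwidths, pitch), not the project's actual synthesizer:

```python
import math

def resonator(signal, freq_hz, fs=16000, bandwidth_hz=80.0):
    """Second-order IIR resonator approximating one formant.

    Assumed design: pole radius set from the bandwidth, pole angle
    from the formant frequency (a standard digital resonator form).
    """
    r = math.exp(-math.pi * bandwidth_hz / fs)
    a1 = 2.0 * r * math.cos(2.0 * math.pi * freq_hz / fs)
    a2 = -r * r
    gain = 1.0 - r  # rough amplitude normalization
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = gain * x + a1 * y1 + a2 * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def synthesize_vowel(f1, f2, duration_s=0.1, f0=120.0, fs=16000):
    """Excite two cascaded formant resonators with a glottal
    impulse train at pitch f0; f1 and f2 are the decoded
    formant frequencies the BCI would supply in real time."""
    n = int(duration_s * fs)
    period = int(fs / f0)
    source = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    return resonator(resonator(source, f1, fs), f2, fs)

# Example: an /a/-like vowel, with typical formants F1 ~ 700 Hz, F2 ~ 1200 Hz
samples = synthesize_vowel(700.0, 1200.0)
```

In a closed-loop BCI, the decoded (F1, F2) pair would be updated continuously from the EEG sensorimotor rhythms, so the subject hears (and sees) the vowel shift through the two-dimensional formant plane as feedback.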

Public Health Relevance

The proposed research will improve our understanding of the feedback control mechanisms required for optimal communication using a brain-computer interface. The outcome of this research has the potential to improve the quality of life for mute or paralyzed patients and others lacking the ability to speak, and to help inform the development of improved brain-computer interface applications for communication.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Small Research Grants (R03)
Project #: 5R03DC011304-03
Application #: 8532671
Study Section: Special Emphasis Panel (ZDC1-SRB-Y (56))
Program Officer: Miller, Roger
Project Start: 2011-09-21
Project End: 2014-08-31
Budget Start: 2013-09-01
Budget End: 2014-08-31
Support Year: 3
Fiscal Year: 2013
Total Cost: $134,168
Indirect Cost: $39,168
Name: University of Kansas Lawrence
Department: Other Health Professions
Type: Schools of Arts and Sciences
DUNS #: 076248616
City: Lawrence
State: KS
Country: United States
Zip Code: 66045