One of the most important applications for Brain-Computer Interfaces (BCIs) is in individuals who are almost completely paralyzed but whose cortical processes are intact, such as patients with Locked-in Syndrome. For these people, communication can be laborious, if not impossible, without the aid of a BCI. Most current non-invasive communication BCIs take an indirect approach, using neural signals unrelated to speech to accomplish spelling or typing tasks. Recent research into direct speech BCIs, however, holds the promise of faster communication because the decoding algorithms draw on neural activity related to more natural communication, reducing the effort required of the user. This study will take the first steps toward developing an electroencephalography (EEG)-based direct, real-time speech BCI to restore communication to severely impaired patients. Specifically, it will develop the ability to detect whether a user is attempting to operate the BCI device or is at rest, and the ability to decode differences in EEG activity patterns between imagined vowels. This will involve (1) developing and optimizing analytical methods to decode EEG signals resulting from imagined movements in a cued paradigm, (2) benchmarking these methods against current methods in the field by applying them to well-known imagined hand movement paradigms, (3) applying the methods to speech-related movements (imagined vowel productions), and (4) extending the methods to decode signals collected in a self-paced paradigm.

To achieve these aims, this study will record EEG from healthy and paralyzed participants while they are at rest or imagining a hand movement or vowel production in response to a cue. Two primary features will be used to decode the EEG signals: (1) the amplitude of the sensorimotor rhythms (SMR) in the mu, beta, and gamma bands, and (2) the amplitude of the signal projected onto the first and last two Common Spatial Patterns (CSPs) associated with the different experimental conditions. Several classifiers will be applied to the data offline to decode imagined movements from each other and from rest, and classifier performance will be assessed through cross-validation. Once the classifiers have been trained on the cued paradigm, they will be applied offline to a self-paced paradigm in which participants are given a fixed amount of time to imagine repeating a movement a given number of times; the classifiers will be judged on how well they predict the number of times each movement was repeated. The same analytical methods and experimental paradigms will be applied both to imagined hand movements (left or right fist clenching vs. rest) and to imagined vowel productions (/a/ or /u/ vs. rest).

In a preliminary study in which one participant performed the cued paradigm, a 31-parameter logistic regression classifier classified right vs. rest, left vs. rest, right vs. left, /a/ vs. rest, /u/ vs. rest, and /a/ vs. /u/ at above-chance levels on the test data, demonstrating the feasibility of the proposed research.
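As an illustrative sketch of the kind of analysis pipeline described above (not the study's actual analysis code), the following Python outline assumes NumPy, SciPy, and scikit-learn, a hypothetical 256 Hz sampling rate, and assumed band edges for the mu, beta, and gamma bands. It computes log band power for each channel, projects each epoch onto the first and last two CSP components, concatenates the two feature sets, and estimates cross-validated logistic regression accuracy for one binary contrast (e.g., imagined /a/ vs. rest).

    import numpy as np
    from scipy.linalg import eigh
    from scipy.signal import butter, filtfilt
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    FS = 256  # assumed sampling rate (Hz); hypothetical, not specified in the proposal
    BANDS = {"mu": (8, 13), "beta": (13, 30), "gamma": (30, 45)}  # assumed band edges (Hz)

    def band_log_power(epochs, fs, band):
        """Log band power per channel; epochs has shape (n_trials, n_channels, n_samples)."""
        lo, hi = band
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, epochs, axis=-1)
        return np.log(np.var(filtered, axis=-1))            # (n_trials, n_channels)

    def csp_filters(epochs, labels, n_pairs=2):
        """CSP projection keeping the first and last n_pairs patterns (binary contrast assumed)."""
        covs = []
        for cls in np.unique(labels):
            trials = epochs[labels == cls]
            c = np.mean([np.cov(t) for t in trials], axis=0)  # average spatial covariance
            covs.append(c / np.trace(c))
        # generalized eigenvalue problem; eigenvalues are returned in ascending order
        _, eigvecs = eigh(covs[0], covs[0] + covs[1])
        keep = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
        return eigvecs[:, keep]                              # (n_channels, 2 * n_pairs)

    def csp_log_power(epochs, w):
        """Log variance of each epoch after projection onto the CSP components."""
        projected = np.einsum("ck,ncs->nks", w, epochs)
        return np.log(np.var(projected, axis=-1))            # (n_trials, 2 * n_pairs)

    def decode(epochs, labels):
        """Cross-validated accuracy for one binary contrast, e.g. imagined /a/ vs. rest."""
        smr = np.hstack([band_log_power(epochs, FS, b) for b in BANDS.values()])
        csp = csp_log_power(epochs, csp_filters(epochs, labels))
        features = np.hstack([smr, csp])
        # NOTE: for an unbiased estimate the CSP filters should be refit within each
        # training fold (e.g., via a scikit-learn Pipeline); they are fit once here
        # only to keep the sketch short.
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, features, labels, cv=5).mean()

In practice, the band edges, filter order, number of CSP pairs, and classifier would be tuned per participant, and the CSP filters would be refit inside each cross-validation fold to avoid information leakage across folds.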
The results of the proposed research will provide the first steps toward developing a non-invasive direct speech brain-computer interface (BCI) for communication. Such an interface has the potential to greatly improve the communication abilities and quality of life of individuals who have near-total paralysis and anarthria but intact cognitive function, such as those with Locked-in Syndrome.