In this project the PI will address the challenge of empowering people with severe motor and speech impairments (SMSI) to socialize through written and spoken language by increasing communication rate through a novel and intuitive computer interface. Available augmentative communication technologies for the SMSI population typically yield speeds on the order of just one word per minute (based on clinical experience). The PI's objective is to develop an EEG-based brain-interface technology built on an intuitive icon-based language generation framework, RSVP iconCHAT, which will achieve increased communication rates for the target population. This technology will exhibit three essential features: rapid serial visual presentation (RSVP) of icons that represent words; a large-vocabulary natural language model capable of accurately predicting intended text, used to control the upcoming sequence of icons shown to the subject for confirmation in the RSVP paradigm; and an intent detection mechanism that fuses information from multichannel electroencephalography (EEG) with the generative probabilistic language model. Advanced statistical signal processing, machine learning, and natural language modeling techniques will be employed to achieve communication rates over an order of magnitude higher than the current state of the art. The project will also contribute novel techniques and algorithms for synchronous brain-interface design, particularly single-trial ERP detection. Both the brain-interface and language-model components will learn from previous interactions with the user and exhibit robust cooperative learning behavior in order to maximize language throughput. A Bayesian and information-theoretic foundation will support this adaptability.
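The intent detection mechanism described above — fusing single-trial EEG evidence with the probabilistic language model — amounts, at its core, to a Bayesian update. The sketch below illustrates that update only; the icon names and all probabilities are invented placeholders, not values from the project.

```python
import numpy as np

def fuse(lm_prior, eeg_likelihood):
    """P(icon | EEG) ∝ P(EEG | icon) * P_LM(icon): multiply the language-model
    prior over candidate icons by the ERP detector's per-trial likelihood,
    then renormalize to obtain a posterior over intended icons."""
    posterior = lm_prior * eeg_likelihood
    return posterior / posterior.sum()

# Hypothetical four-icon candidate set with placeholder probabilities.
icons = ["eat", "drink", "sleep", "help"]
lm_prior = np.array([0.50, 0.25, 0.15, 0.10])        # language-model prediction
eeg_likelihood = np.array([0.30, 0.60, 0.05, 0.05])  # simulated ERP evidence

posterior = fuse(lm_prior, eeg_likelihood)
print(dict(zip(icons, posterior.round(3))))
```

Note how neither source dominates by construction: strong EEG evidence for a less likely word ("drink") can pull its posterior level with the language model's favorite ("eat"), which is the point of fusing the two information streams rather than trusting either alone.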
The PI notes that his approach is innovative along three dimensions: an intuitive icon-based language representation combined with context-dependent language models will be employed for message construction; a noninvasive, user-adaptive brain-computer interface (BCI) will be developed and employed to interface with the icon-based platform; and methods will be developed for probabilistic information fusion between the brain activity measured by the BCI and the predictive language model.
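Putting the pieces together, the RSVP paradigm can be pictured as a sequential loop: the language model supplies a prior, icons are flashed one at a time in order of current probability, the posterior is updated after each flash, and a selection is committed once one icon clears a confidence threshold. The following is a minimal sketch under invented assumptions — a noiseless stand-in replaces the real ERP detector, and the vocabulary, detector rates, and threshold are all hypothetical.

```python
import numpy as np

icons = ["yes", "no", "water", "help"]        # hypothetical icon vocabulary
posterior = np.array([0.4, 0.3, 0.2, 0.1])    # prior from the language model
target = icons.index("water")                 # simulated user intent

P_HIT_TARGET, P_HIT_OTHER = 0.8, 0.2          # assumed ERP-detector hit rates
THRESHOLD = 0.95                              # assumed confidence threshold

flashes = 0
while posterior.max() < THRESHOLD:
    flashed = int(np.argmax(posterior))       # flash the most probable icon
    flashes += 1
    # Noiseless placeholder for single-trial ERP detection: a "hit" is
    # observed exactly when the flashed icon matches the user's intent.
    hit = (flashed == target)
    # Likelihood of that observation under each candidate hypothesis.
    likelihood = np.where(
        np.arange(len(icons)) == flashed,
        P_HIT_TARGET if hit else 1 - P_HIT_TARGET,
        P_HIT_OTHER if hit else 1 - P_HIT_OTHER,
    )
    posterior = posterior * likelihood        # Bayesian update, then renormalize
    posterior /= posterior.sum()

selected = icons[int(np.argmax(posterior))]
print(f"selected {selected!r} after {flashes} flashes")
```

Even though the loop starts by flashing the language model's favorites, the accumulating EEG evidence steers presentation toward the user's actual intent within a handful of flashes — the cooperative behavior between the two components that the proposal targets.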

Broader Impacts: There exists a significant SMSI population arising from conditions such as cerebral palsy (CP), neuromuscular disease (e.g., amyotrophic lateral sclerosis, ALS), and severe spinal cord injury leading to locked-in syndrome (LIS). These communities rely on inefficient modes of communication that limit users' ability to achieve acceptable communication rates. Successful achievement of this project's goals will not only provide the target population with an improved face-to-face communication experience with their able-bodied communication partners, but will also enable them to control their environment and access information. In addition, the work will contribute to information fusion across modalities, optimal data dimensionality reduction, single-trial ERP detection, and human-computer communication through a novel interface. Data collected in experiments will be made available to other researchers in order to accelerate verification of outcomes and dissemination of results.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0914808
Program Officer: Ephraim P. Glinert
Budget Start: 2009-09-01
Budget End: 2013-08-31
Fiscal Year: 2009
Total Cost: $504,098
Name: Northeastern University
City: Boston
State: MA
Country: United States
Zip Code: 02115