This project develops new technologies that measure and model people's state of attention and applies them to virtual reality (VR) and augmented reality (AR) language learning applications. To determine how closely people attend to central learning tasks presented in VR and AR, the project uses two sensor modalities that can index attention: electrical brain activity, measured by electroencephalography (EEG), and eye gaze behavior, measured by eye trackers. Context recognition will play a key role in future VR and AR application scenarios, and both users and content providers can benefit substantially from information about user attention states during information consumption. The project can inform the development of optimized VR and AR content as well as individualized learning strategies. Its motivating application is the optimization of language learning for users across the complete spectrum of ability. In the longer run, additional benefits include special-purpose tools for students with known attention deficits, as well as applications that increase productivity and safety in commercial and industrial settings.
This research explores the novel technologies needed for attention-aware mixed reality (MR) interfaces. The project integrates signals from consumer-grade EEG and eye tracking devices to determine whether, and to what degree, the participant's attention is divided (i.e., distracted or multi-tasking) or focused, and to assist appropriately. In this attention-assisted paradigm, users are monitored by EEG and eye tracking devices while interacting with mixed reality user interfaces. Attention states are classified over time using both sensor modalities and can be spatially referenced in the user interface through eye tracking. Attention feedback can be reported in real time while the user is interacting with the interface, or stored and later visualized for a more thorough analysis of attentional patterns. The project delivers technology demonstrations of attention-aware interfaces for foreign language vocabulary learning in VR (with an outlook toward AR). Exploratory experiments will examine how human attention states can be characterized from EEG and eye tracking data. The project opens new opportunities for multi-modal interaction to contribute to MR interfaces and learning technologies.
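As a rough illustration of how the two modalities could be combined, the minimal Python sketch below labels one time window of data as "focused" or "divided" by pairing an EEG band-power ratio with a gaze-dispersion measure. The feature choices (a theta/beta power ratio, gaze spread), the thresholds, and all function names are illustrative assumptions for exposition only; they are not the project's actual classification pipeline.

"""Illustrative sketch: fusing EEG band power and gaze dispersion into a
per-window attention label. Features, thresholds, and names are hypothetical
placeholders, not the project's actual methods."""

import numpy as np


def eeg_band_power(window: np.ndarray, fs: float, band: tuple) -> float:
    """Mean FFT power of a 1-D EEG window within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(power[mask].mean())


def gaze_dispersion(gaze_xy: np.ndarray) -> float:
    """Spatial spread of gaze samples (N x 2, normalized screen coordinates);
    larger values suggest scattered rather than focused viewing."""
    return float(gaze_xy.std(axis=0).mean())


def classify_attention(eeg: np.ndarray, gaze_xy: np.ndarray, fs: float = 256.0) -> str:
    """Label one time window as 'focused' or 'divided'.

    Heuristic: a high theta/beta power ratio or widely dispersed gaze is
    treated as a sign of divided attention. Thresholds are placeholders.
    """
    theta = eeg_band_power(eeg, fs, (4.0, 8.0))
    beta = eeg_band_power(eeg, fs, (13.0, 30.0))
    ratio = theta / (beta + 1e-12)
    dispersion = gaze_dispersion(gaze_xy)
    return "divided" if (ratio > 3.0 or dispersion > 0.15) else "focused"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 256.0
    t = np.arange(int(fs)) / fs                       # one 1-second EEG window
    eeg = np.sin(2 * np.pi * 20.0 * t) + 0.2 * rng.standard_normal(t.size)  # beta-dominant
    gaze = 0.5 + 0.02 * rng.standard_normal((60, 2))  # tightly clustered fixations
    print(classify_attention(eeg, gaze, fs))          # prints "focused"

In a real system, such per-window labels would be computed continuously and either fed back to the interface immediately or logged with gaze coordinates for later visualization of attentional patterns.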
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.