Virtual and Augmented Reality headsets may soon become major mass-market mobile consumer electronics devices. However, there are challenges in making these headsets more portable and less power-hungry, and in enabling a more immersive user experience by increasing awareness of the user's attention and emotional state. This project's goals are two-fold. The first goal is to enhance systems' ability to track the eye in a low-power yet robust manner by exploring new eye-tracker designs combined with machine learning approaches. The second goal is to study methods for inferring facial expressions and user emotion from wearable headsets by combining multiple sensing modalities, including facial muscle activity and cameras. The project will also have educational impact on middle-school students and students from under-represented groups through workshops.
The project involves activities and innovations in several areas. On the eye-tracking side, the project will explore new hardware designs with stereo cameras and machine learning approaches to enhance robustness to variations in face and eye shape as well as eyeglass movement. On the facial expression sensing side, the team will explore multimodal sensing methods that combine electrooculography (EOG) and multiple camera views to infer expressions and to reduce power consumption. On the networked systems side, the project will investigate leveraging eye tracking and facial expression sensing to enable wireless offload of rendering to a nearby compute node. The techniques to be researched can significantly lower the power consumed by AR and VR systems, enhance their ability to sense human expressions to enable more immersive experiences, and reduce computational complexity by enabling predictive pre-fetch from a wirelessly connected edge cloud.
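To make the predictive pre-fetch idea concrete, the following Python sketch shows one possible client-side loop: it extrapolates recent gaze samples to guess where the user will look next and maps that prediction to a rendered tile that could be requested from an edge node. This is an illustrative sketch only, not the project's design; the tile grid, prediction horizon, and all class and parameter names are assumptions introduced here for exposition.

```python
# Illustrative sketch (assumed, not from the project): linearly extrapolate
# recent gaze samples to predict the next gaze point, then map that point to
# a tile index that a headset could pre-fetch from an edge rendering node.
from dataclasses import dataclass
from collections import deque
from typing import Deque, Tuple

TILE_GRID = (8, 8)      # hypothetical: viewport split into an 8x8 tile grid
HORIZON_MS = 50.0       # hypothetical: predict ~50 ms ahead (network round trip)

@dataclass
class GazeSample:
    t_ms: float         # timestamp in milliseconds
    x: float            # normalized gaze coordinates in [0, 1]
    y: float

class PredictivePrefetcher:
    def __init__(self, history: int = 4):
        self.samples: Deque[GazeSample] = deque(maxlen=history)

    def observe(self, sample: GazeSample) -> None:
        self.samples.append(sample)

    def predict_gaze(self) -> Tuple[float, float]:
        """Constant-velocity extrapolation from the last two gaze samples."""
        if len(self.samples) < 2:
            s = self.samples[-1]
            return s.x, s.y
        a, b = self.samples[-2], self.samples[-1]
        dt = max(b.t_ms - a.t_ms, 1e-3)
        vx, vy = (b.x - a.x) / dt, (b.y - a.y) / dt
        px = min(max(b.x + vx * HORIZON_MS, 0.0), 1.0)
        py = min(max(b.y + vy * HORIZON_MS, 0.0), 1.0)
        return px, py

    def tile_to_prefetch(self) -> Tuple[int, int]:
        """Map the predicted gaze point to the tile index to request."""
        px, py = self.predict_gaze()
        col = min(int(px * TILE_GRID[0]), TILE_GRID[0] - 1)
        row = min(int(py * TILE_GRID[1]), TILE_GRID[1] - 1)
        return row, col

# Example: feed two gaze samples, then ask which tile to request from the edge node.
if __name__ == "__main__":
    p = PredictivePrefetcher()
    p.observe(GazeSample(t_ms=0.0, x=0.40, y=0.50))
    p.observe(GazeSample(t_ms=16.7, x=0.45, y=0.50))
    print("pre-fetch tile (row, col):", p.tile_to_prefetch())
```

A constant-velocity extrapolator is only the simplest possible predictor; the machine-learning-based eye tracking explored in the project would presumably supply a more accurate gaze forecast, but the surrounding pre-fetch structure would look similar.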
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.