Emotions are essential to human life. They directly influence human perception and behavior and strongly affect everyday tasks such as learning, social interaction, and decision-making. Automatic emotion recognition has found applications in many domains, including human-computer interaction, human-robot interaction, multimedia retrieval, social media analysis, and healthcare. Emotional states are expressed through a variety of channels, including facial expression, voice prosody, spoken words, and body gestures. Automatic emotion recognition in real-world applications is a challenging task: real-world emotions involve subtle expressive behaviors, different degrees of expressiveness across channels, and imperfect recording conditions such as background noise or music, poor illumination, and uncontrolled head poses. This EArly-concept Grant for Exploratory Research (EAGER) project aims to address the challenges of spontaneous emotion expressions and imperfect audio and video signals in-the-wild, and to develop a novel multimodal emotion recognition system for real-world applications. The research will lead to advances in data collection, algorithm design, and benchmarking for the next generation of affective computing.

This project consists of several research components. First, a multimodal dataset of spontaneous emotion expressions in-the-wild will be developed. The dataset will contain natural, spontaneous emotion data collected in a variety of challenging real-life environments, together with crowd-sourced ratings of the different modalities (audio and video channels). A thorough benchmark analysis using this dataset will then be conducted to study how different features, modalities, and signal impairments contribute to the success and failure of emotion recognition systems. Finally, novel multimodal emotion recognition algorithms will be designed using adaptive and robust multimodal learning and fusion. The research findings will be made available through dataset sharing, publications, talks, and open-source code, allowing a multitude of developers, researchers, and companies to improve and evolve multimodal emotion recognition in real-world applications. The project will also provide research opportunities for graduate and undergraduate students, including women and minority students.
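The abstract only names "adaptive and robust multimodal learning and fusion" without specifying a method. As a purely illustrative sketch, not the project's actual design, one common way to make fusion robust to a degraded channel is gated late fusion, where a small network learns per-sample weights for the audio and video embeddings so that an unreliable modality can be down-weighted. The module name, feature dimensions, and seven-class output below are hypothetical.

import torch
import torch.nn as nn

class GatedMultimodalFusion(nn.Module):
    """Illustrative gated late fusion of audio and video embeddings.
    A gate network scores the reliability of each modality per sample,
    so a noisy audio track or occluded face contributes less."""

    def __init__(self, audio_dim=128, video_dim=128, hidden_dim=64, num_emotions=7):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        # Produces two non-negative weights per sample that sum to one.
        self.gate = nn.Sequential(
            nn.Linear(audio_dim + video_dim, 2),
            nn.Softmax(dim=-1),
        )
        self.classifier = nn.Linear(hidden_dim, num_emotions)

    def forward(self, audio_feat, video_feat):
        weights = self.gate(torch.cat([audio_feat, video_feat], dim=-1))
        a = torch.tanh(self.audio_proj(audio_feat))
        v = torch.tanh(self.video_proj(video_feat))
        # Weighted sum of the two projected modalities, then classify.
        fused = weights[..., 0:1] * a + weights[..., 1:2] * v
        return self.classifier(fused)

# Example: a batch of 4 utterances with precomputed audio/video features.
model = GatedMultimodalFusion()
logits = model(torch.randn(4, 128), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 7])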

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2020-08-01
Budget End: 2021-07-31
Fiscal Year: 2020
Total Cost: $99,916
Name: New York Institute of Technology
City: Old Westbury
State: NY
Country: United States
Zip Code: 11568