A longstanding goal in artificial intelligence is to develop smart systems that interact well with humans. Advances in sensing and machine learning are increasingly allowing computers to infer mental states, raising questions about how agents might use those inferences to adapt to human partners. This project will systematically address how to design and evaluate "affect-aware" systems that adapt their behavior based on estimates of their users' emotional experiences. The team will first examine the effectiveness of current strategies that vary the difficulty of educational tasks and games based on inferred affect. They will then develop new strategies that take into account both individual personality and dynamic characteristics of the physical environment. Finally, they will evaluate these strategies, paying particular attention to what happens when systems act on incorrect inferences about affect. These studies will help pave the way toward self-driving cars, conversational assistants, and virtual reality characters that consider affect when interacting with people, ideally leading to better experiences and outcomes. The team will also develop new interdisciplinary courses in human factors and human-computer interaction, connecting with industrial partners to help train students in both the practice and research of these kinds of adaptive systems. Further, they will conduct public outreach about these systems and use them to provide summer research experiences for K-12 and community college students, focusing on those from groups traditionally underrepresented in computing.

The project will be structured as a series of lab studies, using spatial cognition games and robot-assisted motor rehabilitation tasks as testbeds that allow the team to directly manipulate task difficulty and measure enjoyment/engagement and performance/learning outcomes. The team will first collect training data with people using the testbeds at randomly selected difficulty levels and reporting the perceived level of difficulty as too easy (bored), too hard (frustrated), or about right, while capturing heart rate signals, skin conductance and temperature, electroencephalogram (EEG) data, and environmental factors including light, time of day, and room temperature. These data will be used to train affect recognizers using a variety of machine learning methods: linear discriminant analysis (including a Kalman adaptive version), support vector machines, neural and Bayesian networks, and random forests. Using a common adaptation strategy that adjusts difficulty up or down one step, the team will measure the enjoyment and performance outcomes achieved when these recognizers drive adaptation, both with and without environmental factors as inputs, comparing them to a baseline strategy that adapts difficulty based only on task performance. During these experiments, the team will also collect data about users' personality characteristics and use those to develop individualized recognition models and adaptation strategies for different personality types. These individualized models and strategies will be evaluated by comparing them to the baseline data from the first experiment. Finally, they will compare the outcomes of these systems with those from a "best-case" system controlled by humans and a "worst-case" error-prone system that chooses adaptation strategies randomly, examining these induced error rates alongside the natural error rates captured during the other experiments to determine how recognition and adaptation errors affect satisfaction and task outcomes.
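To illustrate the kind of pipeline described above, the sketch below trains several of the listed recognizer families on placeholder physiological and environmental features and applies a one-step difficulty adjustment rule. It is a minimal sketch only: the feature set, label encoding, synthetic data, and use of scikit-learn are assumptions made for illustration, not details taken from the project plan (the Kalman-adaptive LDA and Bayesian network variants would require additional, specialized implementations and are omitted here).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder training data standing in for the real sensor recordings.
# Each row is one labeled window; the columns are illustrative features:
# [heart_rate, skin_conductance, skin_temp, eeg_band_power, ambient_light, room_temp]
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
# Labels follow the abstract's three-way self-report:
# 0 = too easy (bored), 1 = about right, 2 = too hard (frustrated)
y = rng.integers(0, 3, size=300)

# Candidate affect recognizers from the families named in the project description.
recognizers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Neural network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000)),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in recognizers.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")

def adapt_difficulty(level, predicted_label, lo=1, hi=10):
    """One-step adaptation: step difficulty up when bored, down when frustrated."""
    if predicted_label == 0:      # too easy -> increase difficulty one step
        return min(level + 1, hi)
    if predicted_label == 2:      # too hard -> decrease difficulty one step
        return max(level - 1, lo)
    return level                  # about right -> keep the current difficulty

# Example: fit one recognizer, then adapt from difficulty level 5 on a new window.
rf = recognizers["Random forest"].fit(X, y)
new_window = rng.normal(size=(1, 6))
print("next level:", adapt_difficulty(5, int(rf.predict(new_window)[0])))
```

In the same spirit, the performance-only baseline would replace the predicted affect label with a threshold on recent task scores, and the "worst-case" comparison system would pick the adjustment at random.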

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1717705
Program Officer: Balakrishnan Prabhakaran
Budget Start: 2017-08-15
Budget End: 2020-07-31
Fiscal Year: 2017
Total Cost: $447,889
Name: University of Wyoming
City: Laramie
State: WY
Country: United States
Zip Code: 82071