This project tackles a fundamental question at the heart of the human-technology partnership: how can designers facilitate the establishment of appropriate trust in technology? Advanced technologies such as autonomous vehicles and collaborative robots are entering every sector of the economy and will fundamentally alter the way people live and work. However, realizing the full economic, safety, and health potential of these technologies is possible only if people establish appropriate trust in them. This project aims to understand and model how trust forms and evolves, and to develop adaptive autonomy that facilitates the establishment of appropriate trust. The project advances STEM education and workforce development by nurturing the next generation of scientists in human-autonomy interaction and by developing outreach activities: for K-12 students, with an emphasis on increasing the participation of women and underrepresented minorities, and for working professionals, to help them adapt to a future workplace in which humans and autonomous agents increasingly work as a team.
The research has three main thrusts: (1) modeling the temporal dynamics of trust formation and evolution; (2) estimating a person's trust in autonomy from behavioral and physiological information; and (3) developing methods that enable an autonomous agent to adapt its behavior and guide a person toward the level of trust required for successful operation. The project will conduct multiple human-in-the-loop studies on a platform in which humans interact with autonomous drones in several scenarios, including search and rescue, where a sequence of tasks and decisions is required. Based on the resulting data, trust dynamics models, trust inference algorithms, and adaptation methods will be built, tested, and validated.
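To make the first thrust concrete, the temporal dynamics of trust can be illustrated with a minimal first-order update rule in which a person's trust moves toward the autonomous agent's observed performance. This is an illustrative sketch only, not the project's actual model; the function name, the learning rate `alpha`, and the [0, 1] trust scale are assumptions.

```python
def update_trust(trust, performance, alpha=0.3):
    """One step of a simple first-order trust-dynamics model.

    trust and performance lie in [0, 1]; alpha is a learning rate.
    Trust moves a fraction alpha of the way toward the agent's
    observed performance on the latest interaction.
    """
    return trust + alpha * (performance - trust)


# Example: trust dipping after one automation failure (performance 0.0)
# and partially recovering over three subsequent successes (1.0).
trust = 0.8
for perf in [0.0, 1.0, 1.0, 1.0]:
    trust = update_trust(trust, perf)
print(round(trust, 3))  # prints 0.849
```

Under this kind of model, a single failure produces a sharp drop in trust while recovery through repeated successes is gradual, which is the sort of asymmetric formation-and-evolution behavior the project's human-in-the-loop studies would characterize empirically.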
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.