The traditional vision of human-robot interaction is that the machines will be fully cooperative partners; correspondingly, issues of robot disagreement have never been explored. Using a variable-based approach, this project will empirically test the effects of robot autonomy, robot form, and robot politeness strategies on human behaviors and attitudes. Behavioral measures will include performance, physiological responses, and memory. Attitudinal measures will include affective responses as well as various assessments of the robot. Results will reveal which aspects of human-human interaction apply directly to human-robot interaction and which differ with respect to performance, memory, and attitudes.
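As a rough illustration of the variable-based design described above, the sketch below enumerates a fully crossed set of experimental conditions from the three manipulated robot features and pairs each with the behavioral and attitudinal measures. The concrete factor levels (e.g., "low"/"high" autonomy, the particular politeness strategies) are assumptions for illustration only and are not specified in this summary.

```python
from itertools import product

# Manipulated robot features (independent variables); the levels shown here
# are illustrative assumptions, not levels stated in the project summary.
factors = {
    "autonomy": ["low", "high"],
    "form": ["mechanical", "humanoid"],
    "politeness_strategy": ["direct", "positive_politeness", "negative_politeness"],
}

# Dependent measures named in the summary: behavioral and attitudinal.
measures = {
    "behavioral": ["performance", "physiological_response", "memory"],
    "attitudinal": ["affective_response", "robot_assessment"],
}

# Fully crossed factorial design: one condition per combination of factor levels.
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, condition in enumerate(conditions, start=1):
    print(f"Condition {i}: {condition} -> measures: {measures}")
```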
This exploratory research project will seek to empirically identify features of robots that influence humans' responses to robots' expressions of disagreement. Results are expected to identify strategies that facilitate the resolution of conflicts between humans and robots. While there are models of human-human disagreement, it is unknown which of these models apply to human-robot interaction. This is an important exploratory area to pursue, given that in many contexts, such as space exploration and colonization, rehabilitation, and complex manufacturing, the robot must express disagreement with the human, a highly charged situation. This research will provide initial answers to the following critical questions: 1) which strategies of disagreement will be most effective and most palatable to human interaction partners? and 2) which characteristics of robots will most effectively enable the situation to be one of joint understanding rather than pure conflict? Findings are expected to enable researchers to create and study robots that are better able to coordinate with humans and assist them in reaching their goals, as well as to reveal the ways in which robots can induce social responses to technology.