This project will examine how people construe the representational states of robots, and the cognitive and perceptual basis for this construal, particularly with respect to vision. Specifically, a series of experimental studies will probe people's beliefs about a highly anthropomorphic humanoid robot named ISAC. The most basic experiments will ask whether subjects overestimate ISAC's ability to see visual changes. Follow-up studies will assess the degree to which these misunderstandings affect assumptions that might underlie on-line human-robot interactions and will explore the perceptual basis for invoking an anthropomorphic model.
People make systematic mispredictions about visual experience, vastly overestimating their own and others' ability to see visual changes. Moreover, these overestimates extend to mechanical representational systems such as computers when those systems are described as having anthropomorphic beliefs, goals, and intentions. Research has also begun to identify specific perceptual cues that may serve to bootstrap knowledge about representations. The focus of this project is both to understand intentional vision in the human users of robotic systems, and ultimately to use this understanding as the basis for improving the artificial intelligence (AI) underlying the robots' processing of human users' intentions.
The research represents a novel application of insights gained from cognitive development to understanding how adults construe mechanical representational systems. As such, it not only has the potential to advance our understanding of adults' models of representation and the perceptual basis of these models, but it also has the potential to guide the development of the AI underlying human-robot interaction. In particular, this research will isolate situations in which user models of humanoid robots may diverge from reality, and specify an ecologically valid basis for AI programming that can structure the coding of intentional human action.
This research will not only enrich the existing collaborations between the cognitive science and engineering communities at Vanderbilt University, but it will also have a broader educational impact. Testing these ideas in the context of a humanoid robot will provide a compelling setting for both graduate and undergraduate students to consider basic questions of representation and mind, and it is expected that Vanderbilt undergraduates will play a crucial role in assisting with this research.
Humanoid robots are currently being developed to fill real-world functions ranging from household chores to elder care. Among the challenges these devices pose, perhaps the most difficult is the need for a two-way understanding between the robots and their human users. Not only do humans need to understand robot capabilities and representational states, but robots require a comparable understanding of humans. This is particularly true if robots are to interact with humans productively and flexibly, a process that demands an alignment of understanding dynamic enough to coordinate a complex flow of changing circumstances, beliefs, desires, and intentions. This research will strengthen the scientific basis for efforts to improve human-robot interaction.