The research goal of this project is to develop new techniques for producing realistic, parameterized, humanlike gestures from motion data acquired from real performances. The work proposes novel computational models and interactive interfaces, and focuses on whole-body demonstrative gestures for interactive training and assistance applications with autonomous virtual humans.
Although gesture modeling has made substantial advances in recent years, less attention has been given to parameterized demonstrative gestures, which can be adapted to refer to arbitrary locations in the environment. This class of gestures is critical for a number of applications; typical examples include pointing to particular devices or objects and demonstrating how to operate them. To ensure the system's effectiveness, the project includes cognitive studies to guide the development of the computational gesture model. To achieve realistic results, the project will employ motion capture data from real performers executing demonstration gestures in real scenarios. The proposed framework will also account for gestures captured interactively from a low-cost set of wearable motion sensors, enabling the customization of gestures needed to program interactive virtual human demonstrators for a broad range of applications.
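To make the notion of parameterization concrete, the sketch below shows one simple way the stroke pose of a captured pointing gesture could be re-aimed at an arbitrary target. It is a minimal illustration under assumed simplifications (a straight shoulder-to-wrist pointing ray, Rodrigues' rotation formula, and hypothetical names such as aim_arm_at_target); it is not the computational model proposed here.

    import numpy as np

    def aim_arm_at_target(shoulder, wrist, target):
        """Rotation re-aiming a captured pointing ray at an arbitrary target.

        shoulder, wrist, target: 3D positions at the gesture stroke frame.
        Returns a 3x3 rotation matrix to apply to the arm about the shoulder.
        """
        v_cur = (wrist - shoulder) / np.linalg.norm(wrist - shoulder)
        v_des = (target - shoulder) / np.linalg.norm(target - shoulder)
        axis = np.cross(v_cur, v_des)         # rotation axis (unnormalized)
        s, c = np.linalg.norm(axis), np.dot(v_cur, v_des)
        if s < 1e-8:                          # rays already (anti)parallel;
            return np.eye(3)                  # robust code would handle c < 0
        axis /= s
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        return np.eye(3) + s * K + (1.0 - c) * (K @ K)  # Rodrigues' formula

Applying the returned rotation about the shoulder (new_wrist = shoulder + R @ (wrist - shoulder)) re-aims the stroke pose; a full solution would adjust the entire arm chain and preserve the gesture's timing and naturalness.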
This project will significantly advance research on gesture modeling with two key contributions: (1) a novel computational model that integrates blending of realistic full-body gestures from motion capture with motion modification techniques to achieve precise, arbitrary placement of the hands at the gesture stroke time, and (2) a new gesture-modeling interface based on direct demonstration, enabling a seamless, human-centered way to program autonomous characters. The approach constitutes a substantial step toward autonomous virtual assistants that can meaningfully and effectively demonstrate tasks and procedures. The interactive interface component has the transformative potential to make gesture programming accessible to non-specialized users, and therefore to allow virtual humans to become widely employed as a powerful communication medium.
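As a rough sketch of what the blending stage of such a model might look like, the example below interpolates time-aligned example gestures using inverse-distance weights on their stroke targets. The weighting scheme, data layout, and the name blend_gestures are illustrative assumptions, not the proposed model, which would additionally apply motion modification to place the hands exactly at stroke time.

    import numpy as np

    def blend_gestures(examples, target, eps=1e-6):
        """Blend example gestures toward a desired stroke target.

        examples: list of (stroke_target, motion) pairs, where each motion
                  is a (frames x dofs) array and all motions are time-aligned.
        target:   desired 3D location for the hand at the gesture stroke.
        """
        dists = np.array([np.linalg.norm(t - target) for t, _ in examples])
        weights = 1.0 / (dists + eps)         # nearby examples dominate
        weights /= weights.sum()
        # Weighted sum of the time-aligned example motions.
        blended = sum(w * m for w, (_, m) in zip(weights, examples))
        # The blended stroke pose only approximates the target; a final
        # IK / motion-modification pass (not shown) would snap the hand
        # exactly onto the target at the stroke frame.
        return blended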
This project has the potential to impact the basic and broad research problem of modeling human movement and cognition, a central topic in information technology. It will also benefit other researchers by producing a unique database of demonstrative gestures, which will be made available on a public project webpage. In addition, it will provide unique educational opportunities for students and contribute to interdisciplinary educational programs through new courses developed or improved around the topics of this research.