Modeling believable human characters in virtual environments is crucial for unleashing the potential of computers as an interactive medium. A realistic depiction of human motion drastically enhances perceptual immersion for task training, social communication, and tele-learning. In a believable virtual environment, a human character must be able to plan its locomotion, react to its surroundings, and engage other virtual characters or real humans with realistic displays of emotion and personality. This daunting task requires simulation spanning the musculoskeletal system to the nervous system, as these intricate components must coordinate in synchrony whenever a functional activity is undertaken. Physics-based character animation today can simulate complex locomotion sequences with stunning visual realism, albeit under the careful guidance of the animator. What is missing from most character animation techniques is autonomous control, both conscious and unconscious. In this project, the investigator expands computer-generated character animation from a visualization tool into an interdisciplinary research area focused on autonomous control and realistic locomotion.
This project focuses on two interrelated research thrusts fundamentally aligned with the interdisciplinary nature of the investigator's research theme: (1) synthesis of non-ballistic, low-energy human locomotion, and (2) synthesis of interaction among multiple characters. Unlike ballistic, highly dynamic motion, non-ballistic motion is not stringently determined by physics and thus requires more sophisticated models to address the complex interplay between unconscious autonomous control, such as physiology and emotion, and conscious autonomous control, such as the character's intent. Autonomous control becomes even more crucial when multiple characters interact. Synthesis of interaction is challenging because high-level artificial intelligence algorithms and low-level locomotion intertwine unpredictably once interaction takes place. Beyond computer animation, this computational model can serve as a device for recognizing emotion, diagnosing musculoskeletal anomalies, and validating biomechanical hypotheses.