This project extends three research domains: ideomotor theory, virtual environments, and tangible interfaces, and seeks to augment human creativity in a fundamental way. A central component of artistic creativity is the ability to imagine and perform new actions. Ideomotor theory in cognitive science suggests that imagination and action share a common coding in the brain. Building on this, the project extends body memories using virtual environments and tangible interfaces. The premise is that augmenting creativity means expanding the number of solutions one can generate, i.e., broadening the creative space. The project develops an environment in which video game characters encode our body movements through tangible user interfaces, as a way to broaden the body memories of the player. The project is multidisciplinary and has a range of possible broader impacts: scientific, technological, and social. On the scientific front, the project would contribute to a better understanding of the cognitive mechanisms underlying imagination, motor learning, and action perception, with potential implications in areas such as conflict resolution, medical rehabilitation, and educational technologies. On the technological front, a variety of applications are possible, including template combinations of movements that generate specific character impressions in viewers, and algorithms that isolate and accentuate the movement features that humans perceive as biological. On the social front, once movement patterns become "swappable", it would be possible to share entire personalities online by sharing whole-body movement patterns. This could lead to a more refined understanding of, and empathy for, others.

Project Report

Field: We combined approaches from tangible interfaces, virtual worlds, and cognitive science to investigate whether new human-computer interfaces can expand a person's creative expression space and body memory. The approach builds on common coding theory, which argues that our action codes are tightly coupled to the perceptual codes associated with the effects those actions have on the world. Because of this tight coupling, perceiving and imagining actions implicitly activates our motor system. Put simply, the model argues that we understand other people's movements, and imagine our own actions, through our own body memory.

Approach: Through a series of experiments, we tested whether the common coding effect applies to the way we read and connect to a virtual avatar under our control. First, we showed that participants can identify their own recorded movement patterns even when those patterns are heavily abstracted or mediated through an embodied control interface such as a puppet. Second, we built a puppet-like interface that allows interactors to control a 3D character in a virtual environment. Using this interface in a basic game-like setting in the 3D world, we demonstrated that interactors recognize their own performance in the abstracted movements of an avatar that acts like a virtual puppet. Once this connection was established, we tested whether recognizing one's own body movements in the avatar can improve an interactor's cognition in a specific task. A comparative study showed that our puppet interface outperformed other game-like interfaces available to date in stimulating participants for a mental rotation task. Our final experiment manipulated the control scheme linking the interactor to the avatar and found that this manipulation affected how participants performed a creative sketching task.

Relevance: The findings of this work are relevant to improving human-computer interaction and also provide insight into the role of the motor and cognitive systems in our interactions with digital media. Initial pilot tests of our interface with stroke patients showed that it could serve as a tool for them to connect to a virtual avatar and its performance. Because the system supports extending one's body memory, it shows promise as a rehabilitation tool for patients who need to (re)learn specific movement patterns, such as stroke patients or patients with certain brain injuries. The system is built on affordable hardware and widely known game-like environments, which makes it a possible option for a home-based rehabilitation system.
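The report does not describe the puppet interface's implementation, but the core control loop it implies can be illustrated with a minimal sketch: sensor readings from the tangible puppet are mapped onto avatar joint angles, with a tunable gain standing in for the kind of control-scheme manipulation used in the final experiment. All names, the sensor stub, and the linear mapping below are illustrative assumptions, not the project's actual code.

```python
"""Hypothetical sketch of a puppet-to-avatar control step.

Assumptions (not from the report): joint angles arrive as named
readings from the puppet hardware, and the avatar accepts a plain
name -> angle mapping. A `gain` parameter models control-scheme
manipulations such as exaggerating or attenuating movements.
"""

from dataclasses import dataclass


@dataclass
class JointReading:
    name: str     # e.g. "left_elbow"
    angle: float  # degrees, as reported by the puppet's sensor


def read_puppet_sensors() -> list[JointReading]:
    """Stub standing in for the tangible interface hardware."""
    return [JointReading("left_elbow", 42.0), JointReading("right_knee", 10.0)]


def map_to_avatar(readings: list[JointReading],
                  gain: float = 1.0,
                  offset: float = 0.0) -> dict[str, float]:
    """Map puppet joint angles to avatar joint angles.

    gain/offset are hypothetical knobs for abstracting or
    exaggerating the interactor's movements on the avatar.
    """
    return {r.name: gain * r.angle + offset for r in readings}


if __name__ == "__main__":
    pose = map_to_avatar(read_puppet_sensors(), gain=1.5)
    print(pose)  # {'left_elbow': 63.0, 'right_knee': 15.0}
```

In a real system this mapping would run once per frame inside the game loop; the point of the sketch is only that the avatar's motion is a parameterized transform of the interactor's own movement, which is what lets the control scheme be manipulated experimentally.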

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0757370
Program Officer: William Bainbridge
Budget Start: 2008-08-01
Budget End: 2011-07-31
Fiscal Year: 2007
Total Cost: $261,817
Name: Georgia Tech Research Corporation
City: Atlanta
State: GA
Country: United States
Zip Code: 30332