This project studies creativity in improvisation, both in standard theatrical practice, where an actor improvises the interpretation of a script, including the physical performance, and in "improv theatre," where actors create entire scenes in real time through improvisation. This research will increase the state of knowledge about improvisation, creativity, and intelligent agent design, and will contribute meaningfully to theoretical and academic understanding of creative practice in theatre. In addition to integrating engineering and scientific methods with the theory and practice of acting, this research will contribute cognitive and computational models of improvisation, emotion, acting styles, and problem solving in the context of theatre. These models will pave the way for the development of more sophisticated synthetic characters, virtual humans, and other intelligent autonomous agents that can interact with humans and with each other for purposes of entertainment, education, and training.

Project Report

This project, called The Digital Improv Project, has focused on the study of human co-creativity and improvisation. Improvisational stories are a highly interesting example of human creativity: performers constantly make creative, collaborative decisions in real time under severe constraints. We have studied improvisational actors over the course of several years to better understand the socio-cognitive processes they employ in co-creating stories, and we have applied this understanding to the construction of artificial intelligence programs that embody our formal models. The rationale is that if we can build formal models of the socio-cognitive phenomena we are examining, then we have a solid understanding of them.

Our findings have led to a better understanding of improvisational narrative construction and shared meaning making. The process of building shared meaning in a group is called building a "shared mental model," and it is broadly ubiquitous in human co-creative settings. It is a continuous process in which each individual recognizes differences between mental models in a problem-solving environment, attempts to reconcile those differences, and monitors whether the repair was successful (see the first sketch below).

Our socio-cognitive studies have resulted in the development of multiple AI-based improvisational theatre applications. We created the "Party Quirks" installation as a means of representing the process of performing characters in an improvised scene. Users interacted with improvisational characters on a virtual stage via an iPad interface. They played the host of a party, trying to understand the quirky characters that each AI actor was portraying by interacting with them via shared mental model moves. This work was accepted into the 2011 Chicago Improv Festival as a normal "troupe entry" and was demoed for hundreds of attendees at the festival.

Our second piece, "Three Line Scene," examined how to collaboratively construct the introductory details of a scene with AI actors through movement-based interaction using a Microsoft Kinect. We created an open-ended user experience in which the user could suggest different characters and situations with body gestures. The AI actor would sense the movement, reason about its relation to what it knows about the story world (in this case, the Old West), and decide how to proceed in building a shared mental model of the scene. The AI may settle on a single interpretation of the user's behavior, pick one interpretation from an ambiguous set of possible interpretations, or execute a shared mental model move to try to get on the same page with the user (see the second sketch below).

The Digital Improv Project has contributed to a better understanding of humans as creative entities. We have formalized our findings in the form of interactive AI-based installations, providing both fertile ground for exploring co-creation through different human-computer interfaces (e.g., the iPad and Kinect) and a vehicle for the generation of creative content by human/computer teams. More information about this work, and our subsequent research on human co-creativity, can be found at http://adam.cc.gatech.edu.
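To make the shared mental model process concrete, the first sketch below renders the recognize/reconcile/monitor cycle as a minimal Python loop. This is our own illustration under assumed names (the MentalModel class and the observe and act callbacks are hypothetical), not code from the installations themselves.

    # Minimal sketch of one cycle of the shared mental model (SMM) repair
    # process: recognize divergence, attempt to reconcile, monitor whether
    # the repair succeeded. All names are hypothetical illustrations.
    from dataclasses import dataclass, field

    @dataclass
    class MentalModel:
        beliefs: dict = field(default_factory=dict)  # an agent's view of the scene

        def conflicts_with(self, observation: dict) -> set:
            # Recognize differences: beliefs contradicted by what the
            # partner just said or did.
            return {k for k, v in observation.items()
                    if k in self.beliefs and self.beliefs[k] != v}

    def smm_repair_step(model: MentalModel, observe, act) -> bool:
        # One iteration of the continuous process; returns True when the
        # models appear aligned after this step.
        observation = observe()
        for key in model.conflicts_with(observation):
            model.beliefs[key] = observation[key]   # reconcile: adopt the partner's framing
            act("confirm", key)                     # SMM move: restate the detail aloud
        follow_up = observe()                       # monitor: did the repair succeed?
        return not model.conflicts_with(follow_up)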
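The second sketch, equally hypothetical, summarizes the three options described for the Three Line Scene actor: commit to a single clear reading of a gesture, pick one reading from an ambiguous set, or fall back to a shared mental model move. The candidate scores and the confidence threshold are assumptions made for illustration.

    # Illustrative sketch of the Three Line Scene actor's decision among
    # its three options when responding to a sensed gesture. The scoring
    # scheme and threshold are assumptions, not the installation's code.
    import random

    def respond_to_gesture(candidates, threshold=0.8):
        # candidates: list of (interpretation, confidence) pairs produced
        # by relating the sensed movement to the story world (the Old West).
        if not candidates:
            return ("smm_move", "ask the user to clarify")       # no reading at all

        best, best_score = max(candidates, key=lambda c: c[1])
        if best_score >= threshold:
            return ("commit", best)                              # single clear interpretation

        plausible = [i for i, s in candidates if s >= 0.5 * best_score]
        if len(plausible) > 1:
            return ("commit", random.choice(plausible))          # pick within the ambiguous set

        return ("smm_move", "signal uncertainty about " + best)  # get on the same page

For example, respond_to_gesture([("draw_pistol", 0.4), ("tip_hat", 0.35)]) would commit at random to one of two comparably plausible Old West readings, mirroring how an improviser accepts one offer rather than stalling the scene.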

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0757567
Program Officer: William Bainbridge
Budget Start: 2008-07-01
Budget End: 2013-06-30
Fiscal Year: 2007
Total Cost: $529,894
Name: Georgia Tech Research Corporation
City: Atlanta
State: GA
Country: United States
Zip Code: 30332