This project will explore approaches to artificial intelligence that can support creative digital filmmaking, an extremely rich new form of expression and communication. The most accessible variant of digital filmmaking is "machinima" - cinematic movies created by manipulating avatars in 3D computer game worlds. Due to the allure of cheap, quick, and easy movie making, and the accessibility of high-fidelity graphics through video game technologies, machinima has grown into a mainstream form of creative expression and sharing. However, machinima still has a high threshold of entry. This is only partly due to technical tools, which are cheap and easily acquired; digital filmmaking also demands considerable skill. In general, creativity is collaborative, with creators often seeking feedback and critique from others. Intelligent systems can also participate in the feedback loop of creative practice by suggesting, autonomously creating, and critiquing digital media.

The goal of this research is to reduce the technological and skill barriers to complex but rich forms of digital expression such as filmmaking, thereby increasing the creative productivity of amateur creators. Its approach is to develop digital media production tools that embed computational models of creative practice and offer intuitive interfaces informed by empirical studies. The anticipated results are a greater understanding of creative processes involving feedback and critique, models of cognitive and emotive processes in human recipients of creative artifacts, and understanding of the tradeoffs of interface modalities involving intelligent participatory systems. The project is organized around two major, interrelated thrusts: (1) develop cognitive and computational models of feedback and critique as a means toward intelligent systems that participate in creative endeavors; (2) study how the creative abilities of amateur and expert digital filmmakers are affected by production interfaces along dimensions of (a) degree of constraint in cinematic control and (b) modes of intelligent participatory support.

It is anticipated that the resultant models and implementations will serve as next-generation creativity support tools to be adopted by the amateur digital filmmaking and machinima communities. By achieving its research goals, this project will demonstrate a technique for lowering the threshold of entry to a form of digital media creation. Lowering the threshold of machinima production, in particular, will open the practice to populations of users historically underrepresented in computing, such as women, who are attracted to storytelling but often discouraged by the highly technical "hacker" skills required. As an expressive form, digital filmmaking is a powerful medium for communication, can be used as a draw to computing, and can be integrated into a wide repertoire of activities including entertainment and education. The resultant models and implementations may also impact the growing practice of previsualization in the movie and television industries. The approach will yield a model for incorporating intelligent creative assistance into other forms of expressive digital media.

Project Report

Machinima is a new form of creative digital filmmaking that leverages the real-time graphics rendering capabilities of video game engines to create high-fidelity animations. In its most rudimentary form, a machinima film is a recording of scripted video game characters with an audio overlay. However, the tools of the trade have expanded beyond the confines of early video game engines to introduce much more complexity and nuance, including control of lighting, set and character design, and cinematography. This creative reuse of technology has opened a door for individuals with no experience in animation or filmmaking to create professional-looking animated films. The challenge is to provide these amateur users with assistive technology that fits their practices. The project, Assistive Artificial Intelligence to Support Creative Filmmaking in Computer Animation, explored approaches to artificial intelligence that augment the creative artistic endeavors of machinima producers.

Based on the results of a study of expert machinimators (people who create machinima), we developed the Distributed Exploratory Visualization (DEV) model of human creativity. The DEV model highlights the importance of visualization tools in the expert creative process. Expert creators begin with preinventive mental sketches of their desired goal. They use tools to visualize these ideas, but the tools also help them explore new ideas and produce new goals, because the tools constrain the creative process while providing affordances to try new variations. As the creative process cycles between structured and exploratory phases, the created artifact is constantly evaluated against the creator's goals.

A study of amateur digital filmmakers, those without previous experience, demonstrated that amateurs regularly violate cinematic conventions that experts believe meet the common expectations of viewers. Cinematic conventions govern how the camera is placed relative to the action in a movie scene. The study showed that a simple rule-based artificial intelligence that detects convention violations and provides feedback in the form of warnings could significantly increase the overall quality of digital films produced by amateurs. The intelligent feedback system did not indicate how to correct a violation but provided information about the violated convention. One important conclusion is that amateurs do not want or need automation as part of the creative process, but can benefit from a helpful, automated critic.

We also developed a tangible and gestural user interface for amateur digital filmmakers. The system uses the Kinect 3D camera to track users' hands above a tabletop surface display and to control a virtual set with physical props. This allows users to directly manipulate virtual cameras and set designs in 3D space, and physical figurines can be used to manipulate the virtual actors. To support editing, virtual cameras are arranged on a timeline, and users can scrub back and forth in time to see a real-time rendering of their digital film on the tabletop interface. The rule-based critic remains part of the system, highlighting camera placements that violate conventions. A user study showed increased engagement over a conventional desktop display alone, although the value of the gestural interface varied with each user's prior familiarity with the domain we tested, 3D image and content manipulation.
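The report describes the critic only at the level of detecting convention violations and issuing warnings. As a minimal illustration of how such a rule-based check might work, the sketch below tests a single convention, the 180-degree rule, across a pair of shots; the scene representation, function names, and example coordinates are hypothetical and not the project's actual implementation.

```python
# Illustrative sketch of a rule-based cinematic critic (hypothetical API,
# not the project's implementation). It checks one convention, the
# 180-degree rule: consecutive shots of two interacting characters should
# keep the camera on the same side of the line of action.

from dataclasses import dataclass

@dataclass
class Shot:
    camera: tuple      # (x, y) camera position on the ground plane
    subject_a: tuple   # (x, y) position of first character
    subject_b: tuple   # (x, y) position of second character

def side_of_action_line(shot: Shot) -> float:
    """Signed area test: positive if the camera lies left of the A->B line, negative if right."""
    (ax, ay), (bx, by), (cx, cy) = shot.subject_a, shot.subject_b, shot.camera
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def check_180_degree_rule(prev: Shot, curr: Shot) -> list[str]:
    """Return warning messages (not corrections) when the convention is violated."""
    warnings = []
    if side_of_action_line(prev) * side_of_action_line(curr) < 0:
        warnings.append(
            "180-degree rule: the camera crossed the line of action between "
            "shots, which may reverse the characters' apparent screen positions."
        )
    return warnings

# Example: the second shot crosses to the opposite side of the line of action.
shot1 = Shot(camera=(0.0, -5.0), subject_a=(-1.0, 0.0), subject_b=(1.0, 0.0))
shot2 = Shot(camera=(0.0, 5.0), subject_a=(-1.0, 0.0), subject_b=(1.0, 0.0))
for msg in check_180_degree_rule(shot1, shot2):
    print(msg)
```

As in the study, the check only reports which convention was violated; it leaves the decision of how (or whether) to fix the shot to the amateur filmmaker.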
To go beyond supporting amateurs with simple cinematic conventions, an intelligent creative support system needs to understand the semantics of scenes. In particular, intelligent creative support systems need to be able to predict when viewers of films or readers of stories will have emotional responses such as suspense. We developed a computational model of suspense, called Dramatis, which is capable of predicting the suspense response of the typical story reader. Given two versions of a story, Dramatis accurately chooses which one will be perceived by humans as more suspenseful. This is a potentially useful capability for providing feedback to story authors and digital filmmakers as part of their creative loop.
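The report does not spell out how Dramatis computes its prediction. As a loose, hypothetical sketch, the snippet below follows the intuition that suspense rises when the reader perceives fewer and less likely ways for the protagonist to avoid a threatened negative outcome, and simply compares two story versions by their peak suspense; all names and the scoring formula are assumptions for illustration, not the actual Dramatis model.

```python
# Hypothetical sketch of comparing two story versions by predicted suspense.
# Assumption: at each moment where a negative outcome threatens the protagonist,
# the reader's suspense grows as the best perceived escape plan becomes less
# likely to succeed. Illustration only, not the Dramatis algorithm.

def moment_suspense(escape_plan_likelihoods: list[float]) -> float:
    """Suspense at one story moment: high when even the best escape is unlikely."""
    if not escape_plan_likelihoods:   # the reader sees no way out
        return 1.0
    return 1.0 - max(escape_plan_likelihoods)

def story_suspense(moments: list[list[float]]) -> float:
    """Overall rating for a story version: its most suspenseful moment."""
    return max(moment_suspense(m) for m in moments) if moments else 0.0

def more_suspenseful(version_a: list[list[float]], version_b: list[list[float]]) -> str:
    return "A" if story_suspense(version_a) >= story_suspense(version_b) else "B"

# Version A: the hero always has a near-certain escape; version B does not.
version_a = [[0.9, 0.6], [0.95]]
version_b = [[0.9, 0.6], [0.2, 0.1]]
print(more_suspenseful(version_a, version_b))   # -> "B"
```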

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1002748
Program Officer: William Bainbridge
Budget Start: 2010-09-01
Budget End: 2014-08-31
Fiscal Year: 2010
Total Cost: $695,485
Name: Georgia Tech Research Corporation
City: Atlanta
State: GA
Country: United States
Zip Code: 30332