The PI's objective in this project is to advance the field of robotic musicianship by enabling multi-modal communication between human and robotic musicians, and by embedding knowledge-based musical intelligence in real-time musical interactions. To these ends, the PI will seek to understand how temporal structures in music are represented and processed by humans, and he will develop a vision system that enables a robotic musician to anticipate a human musician's gestures. Modeled after human-human musical interaction, this artificial vision system will complement the musical listening system developed in the PI's prior work, allowing a robot to better synchronize its playing with human improvisers. In addition, the PI will transcribe and analyze (using statistical tools such as Hidden Markov Models) a large corpus of works by the great classical composers and by masters of jazz. This off-line analysis, coupled with interviews, surveys, and focus groups, will advance our knowledge of the cognitive and mechanical aspects of group play and of the role of visual and physical cues in live performance, and will enable the PI to construct a long-term cultural knowledge base that complements the short-term, real-time improvisation techniques developed in his prior work. Collectively, the project outcomes will lead to a comprehensive model of human and artificial musicianship, which the PI will evaluate using behavioral measures (such as time differences in synchronization), subject questionnaires, and a Turing-like test of the quality of the musical interaction.
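As a purely illustrative sketch of the kind of Hidden Markov Model analysis mentioned above (this is not the PI's implementation; the two hidden states, the probability tables, and the toy observation sequence are invented assumptions), the following Python example decodes the most likely hidden-state sequence for a short symbolic melody using the Viterbi algorithm.

import numpy as np

# Illustrative toy discrete HMM, not the project's actual model. Hidden states
# might represent hypothetical phrase roles; observations are symbols from a
# transcribed melody (e.g., quantized pitch classes with 3 possible values).
states = ["tension", "resolution"]          # hypothetical hidden states
start_p = np.array([0.6, 0.4])              # initial state probabilities
trans_p = np.array([[0.7, 0.3],             # P(next state | current state)
                    [0.4, 0.6]])
emit_p = np.array([[0.1, 0.4, 0.5],         # P(observed symbol | state)
                   [0.6, 0.3, 0.1]])

def viterbi(obs):
    """Return the most likely hidden-state sequence for an observation sequence."""
    n_states, T = len(states), len(obs)
    logp = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logp[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logp[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            logp[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    # Trace the best path backwards from the most likely final state.
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 2, 2, 1]))  # toy observation sequence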

Broader Impacts: This project will make fundamental contributions to our knowledge in areas such as musicianship, human-robot interaction, computer-assisted collaboration, and improvisation. By creating novel musical collaborations between humans and machines, it will help bring human-robot interaction to the general public through high-visibility concerts that capture the interest and imagination of students who are not regularly drawn to math, the sciences, or engineering. The project will also serve as a testbed for future forms of musical interaction, bringing perceptual and algorithmic aspects of computer music into the physical world both visually and acoustically, which may inspire people to play and think about music in new ways. Ultimately, the research is expected to shed light on broader concepts such as human and artificial creativity and expression, and on the feasibility of machines creating, or assisting in creating, meritorious aesthetic and artistic products.

Project Report

Five robots have been developed as part of this project: Shimon, a human-sized improvising social marimba-playing robot, and four copies of Shimi, a smaller robotic music companion. The robots were designed to listen to and observe human musicians and to respond with musical improvisation algorithms and gesture-based social interaction. The project advanced the field of robotic musicianship by enabling multi-modal communication between human and robotic musicians and by embedding knowledge-based musical intelligence in real-time musical interactions. As part of the project, we developed a vision system that allows robotic musicians to anticipate humans' gestures, enabling synchronization and coordination of gestures with human improvisers. The project advanced human knowledge and understanding of musical cognition and interpersonal musicianship and led to rich, novel musical outcomes that were performed in concerts and festivals worldwide. The user studies conducted as part of the project showed that visual cues and enhanced robotic embodiment can significantly improve human-robot synchronization, coordination, and anticipation.

The project has created broad impact in the public sphere as well as the research community, bringing human-robot interaction to the general public through numerous workshops and high-visibility concerts designed to capture the interest and imagination of students who are not regularly drawn to music, math, science, or engineering. In addition, the project served as a pivotal point for the PhD program and the Georgia Tech Center for Music Technology (GTCMT), helping to position GTCMT as the leading music technology center in the Southeast. The PhD program has provided education and research in music, engineering, and computer science, as well as an academic home for creative graduate students, researchers, and faculty members interested in bringing together their scientific, technological, and artistic skills.

The project led to 4 journal papers, 8 papers in conference proceedings, and 18 concerts and public presentations at prestigious venues including the World Economic Forum in Davos, TED, SIGGRAPH, Ars Electronica, the USA Science & Engineering Festival, TechCrunch Disrupt, and Google I/O, among others. The project also led to the filing of a provisional patent and the formation of a commercial company, Tovbot, which has raised $500K from a private investor to commercialize and market the robot Shimi. The robot Shimon, in turn, became the face of Georgia Tech, featured in its Emmy-winning public service announcement. Both Shimon and Shimi received wide media coverage, including the New York Times, the Washington Post, NPR, the BBC, Wired, CNN, and New Scientist; TV shows such as Discovery and The Colbert Report; and blogs such as TechCrunch, Engadget, and The Next Web.
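To illustrate the kind of behavioral synchronization measure referenced above, the sketch below is a minimal Python example, not the project's actual analysis code; the function name and toy onset times are invented for illustration. It computes the mean absolute time difference between each robot note onset and the nearest human note onset.

import numpy as np

def mean_onset_asynchrony(human_onsets, robot_onsets):
    """Mean absolute asynchrony (seconds) between robot onsets and nearest human onsets."""
    human = np.sort(np.asarray(human_onsets, dtype=float))
    diffs = []
    for t in np.asarray(robot_onsets, dtype=float):
        idx = np.searchsorted(human, t)
        candidates = human[max(idx - 1, 0):idx + 1]   # nearest human onsets on either side
        diffs.append(np.min(np.abs(candidates - t)))
    return float(np.mean(diffs))

# Toy example: the robot lags slightly behind the human beat.
human = [0.00, 0.50, 1.00, 1.50, 2.00]
robot = [0.04, 0.55, 1.02, 1.53, 2.06]
print(f"mean asynchrony: {mean_onset_asynchrony(human, robot) * 1000:.1f} ms")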

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1017169
Program Officer: Ephraim Glinert
Budget Start: 2010-08-01
Budget End: 2014-07-31
Fiscal Year: 2010
Total Cost: $547,430
Name: Georgia Tech Research Corporation
City: Atlanta
State: GA
Country: United States
Zip Code: 30332