The Virtualization Studio spearheads research to reconstruct, record, and render dynamic events in 3D. The studio creates a "full-body" interactive environment in which multiple users are simultaneously given a visceral sense of three-dimensional space, through vision and sound, and are able to interact, through action and speech, unencumbered by 3D glasses, head-mounted displays, or special clothing. The studio pursues the thesis that robust sensing for hard problems, in this case audiovisual reconstruction of multiple highly dynamic actors and speakers, can be achieved with a large number of sensors running simple, parallelized algorithms. High-fidelity reconstructions are created using a grid of 1,132 cameras, and a 128-node microphone array is used to localize and associate multiple sound sources in the event space. In addition, a multi-viewer lenticular display screen, consisting of 48 projectors, and a front surround-sound speaker system are used to render interactive environments. The reconstruction algorithms are parallelized, and a compute cluster processes the data and responds to behaviors in the event space in real time.
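To illustrate the kind of simple, parallelizable per-sensor computation this thesis envisions, the sketch below estimates the time difference of arrival between one pair of microphones using GCC-PHAT (generalized cross-correlation with phase transform), a standard building block for microphone-array sound-source localization. This is a minimal sketch, not the studio's actual pipeline; the function name, parameters, and synthetic test signal are illustrative assumptions.

    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        # Whiten the cross-power spectrum so only phase (i.e., delay)
        # information remains, then find the peak of the correlation.
        n = sig.shape[0] + ref.shape[0]
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        R /= np.abs(R) + 1e-15          # PHAT weighting
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        if max_tau is not None:
            max_shift = min(int(fs * max_tau), max_shift)
        # Re-center so index max_shift corresponds to zero delay.
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / fs               # delay in seconds

    # Synthetic check: a broadband source that arrives 25 samples
    # later at the second microphone.
    fs = 16000
    rng = np.random.default_rng(0)
    src = rng.standard_normal(fs)
    delay = 25
    mic_a = src
    mic_b = np.concatenate((np.zeros(delay), src[:-delay]))
    tau = gcc_phat(mic_b, mic_a, fs)    # ~= 25 / 16000 s
    # tau times the speed of sound gives the extra path length to
    # mic_b; intersecting such pairwise constraints across many
    # microphone pairs localizes the source in 3D.

Each pairwise delay estimate is independent, so the computation parallelizes naturally across the array's nodes, and the PHAT whitening makes the peak robust in reverberant rooms where raw cross-correlation is smeared by echoes.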
Audiovisual reconstruction and rendering of scenes containing multiple users will revolutionize research into collaborative interfaces and will allow digital preservation of culturally significant events, such as theatrical performances, sporting events, and key speeches. In addition to these core research objectives, the Virtualization Studio will act as a gathering place for multidisciplinary research, bringing together researchers in interactive art, human behavior analysis, computer graphics, computer vision, psychology, big data, and speech processing. The infrastructure will be used to develop a new course on Human Virtualization and will serve as a pedagogical tool in several existing courses and outreach projects, introducing the next generation of students to the power of interdisciplinary research in computer science.