Tele-immersion is an emerging technology that enables users at geographically distributed sites to interact in shared activities such as tai-chi training, dancing, or assisted physical therapy. This is achieved through realistic video and sound reconstruction of the activities in three-dimensional (3D) space in real time. Several components must be in place for these tele-immersive environments to achieve a seamless immersive experience: (a) a set of cameras covering the 360-degree space in which the activity takes place, thereby creating 3D data sets, (b) a sound system capturing sound from 360 degrees without echo, (c) broadband networking technology that delivers large data sets across geographically distributed sites end-to-end with minimal latency and synchronous delivery, and (d) display technologies that present data from the different sites in a consistent fashion. Many components of tele-immersive environments have been developed, and preliminary small-scale experiments with individual, customized components have been performed.
What is missing is a holistic deployment of, and experimentation with, the existing tele-immersive components. In this one-year project, we aim to (1) integrate commercial off-the-shelf (COTS) components currently available for tele-immersive environments, such as 3D cameras, wireless sound systems, Internet2 protocols, and existing plasma and projector-type displays, (2) deploy the integrated COTS components at two geographically distributed sites, UC Berkeley and the University of Illinois at Urbana-Champaign (UIUC), and (3) test and execute preliminary experiments with our holistic integrated system, called the Tele-immersive Environment for EVErybody (TEEVE). This integration and deployment will yield unique results because it will test the capabilities and limitations of existing COTS components in delivering cutting-edge technology to a broader audience. We also expect the integration and deployment to reveal future research challenges, since we plan to perform extensive experimental analysis of the TEEVE system. Our experimental analysis will span network-based measurements, where we plan to measure and evaluate the bandwidth achieved across Internet2 when sending large numbers of audio and video streams, as well as latency, jitter, loss rate, response time, and the compression ratio on real-time 3D data; vision-based measurements, where we plan to assess the correctness of existing 3D reconstruction algorithms in terms of semantics and real-time behavior; and user-based measurements, where we plan to experiment with a tai-chi teacher at the UC Berkeley site and question students at the UIUC site about their experience compared with using a standard video tape.
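The network-based metrics named above (latency, jitter, loss rate) can be derived from per-packet records of sequence numbers and send/receive timestamps. The following is a minimal illustrative sketch of such a computation, not part of the proposed system; the function name and record format are hypothetical, and the jitter estimator follows the smoothed interarrival-jitter formula of RFC 3550 (RTP).

```python
def network_metrics(packets):
    """Compute (mean one-way latency, RFC 3550 jitter, loss rate).

    packets: list of (seq, send_ts, recv_ts) tuples, timestamps in seconds.
    Assumes sender and receiver clocks are synchronized (e.g., via NTP).
    """
    packets = sorted(packets)                  # order by sequence number
    delays = [recv - send for _, send, recv in packets]
    mean_latency = sum(delays) / len(delays)

    # RFC 3550 smoothed interarrival jitter: J += (|D| - J) / 16,
    # where D is the difference between consecutive one-way delays.
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0

    # Loss rate inferred from gaps in the sequence-number space.
    expected = packets[-1][0] - packets[0][0] + 1
    loss_rate = 1.0 - len(packets) / expected
    return mean_latency, jitter, loss_rate
```

For example, three packets with sequence numbers 1, 2, 4 would indicate one lost packet out of four expected, i.e., a loss rate of 25%; in the deployed system such records would be collected continuously for each audio and video stream.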
The intellectual merit rests in (1) the extensive integration of COTS components, which will consider and analyze the multiple configurations and synchronization mechanisms possible in our multi-tier system, (2) the deployment of, and experimentation with, the integrated system, measuring various metrics at different system and user levels, and (3) integration guidelines and setup methodologies for assembling a holistic tele-immersive infrastructure such as TEEVE. Experiences and experimental results will be shared with the research community in the form of documentation, software, and tutorial(s) at conferences such as ACM Multimedia, the IEEE International Conference on Multimedia and Expo (ICME), and others.
Broader impacts will derive from the fact that this will be the first holistic deployment of an affordable tele-immersive Internet-based infrastructure. TEEVE is the first step toward sharing activities that require full-body visual and auditory information across geographically distributed sites. Examples exist in health care, such as delivering physical therapy to users in remote locations (e.g., Native American and Inuit communities), in the training of sport activities such as tai-chi, and in art training such as dancing.