Swindlehurst, Arnold, Brigham Young University
Current audiovisual recording, communication, and playback provide a single, two-dimensional perspective of the world as it varies through time. Humans, however, live in a "four-dimensional" world, in which they move about (in 3-D space over time) at their own volition to experience the world from any perspective. Advances in sensor, computer, and networking technologies now make possible new systems that employ multiple cameras and microphones together with sophisticated processing algorithms to deliver unprecedented immersive recording and viewing capabilities. Sufficient sensing, networking, and computing power to practically address this vision already exists; the critical gap in achieving it is the lack of the necessary signal processing theory and algorithms. This research will develop new signal processing techniques for reconstructing the audio and visual recording at an arbitrary location in space and time from multiple acoustic and video sensors, by extending recent research in adaptive beamforming and multisensor signal processing of non-stationary signals, and through fundamental new advances in multi-dimensional signal representation. Practical four-dimensional audiovisual recording, transmission, and playback, or "remote reality", will be demonstrated with low-cost, conventional sensors attached to networked computers, thus confirming the practicality of the proposed methods and applications.
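To make the multisensor reconstruction idea concrete, the following is a minimal illustrative sketch (not the proposal's actual algorithm) of delay-and-sum beamforming, the simplest form of steering an acoustic array toward a chosen spatial point. All positions, signals, and function names here are illustrative assumptions: a chirp source is recorded by four simulated microphones, and the channels are re-aligned to a focus point; energy adds coherently only when the focus coincides with the true source location.

```python
import numpy as np

# Illustrative parameters (assumed, not from the proposal).
fs = 8000.0  # sample rate (Hz)
c = 343.0    # speed of sound (m/s)

# Source: a 200-2200 Hz chirp emitted from a known 2-D location.
t = np.arange(0, 0.1, 1 / fs)
source = np.sin(2 * np.pi * (200 * t + 10000 * t**2))
src_pos = np.array([2.0, 1.0])

# Four microphones along a line at x = 0.
mics = np.array([[0.0, 0.0], [0.0, 0.5], [0.0, 1.0], [0.0, 1.5]])

def propagate(sig, src, mic):
    """Delay sig by the source-to-mic travel time, rounded to whole
    samples (amplitude attenuation is ignored for simplicity)."""
    d = int(round(np.linalg.norm(src - mic) / c * fs))
    return np.concatenate([np.zeros(d), sig])

# Simulate the recordings and pad them to a common length.
recs = [propagate(source, src_pos, m) for m in mics]
n = max(len(r) for r in recs)
recs = np.array([np.pad(r, (0, n - len(r))) for r in recs])

def delay_and_sum(recs, mics, focus):
    """Shift each channel so signals from `focus` align, then average.
    Channels focused on the true source add coherently; a wrong focus
    leaves the channels misaligned and the average partially cancels."""
    delays = [int(round(np.linalg.norm(focus - m) / c * fs))
              for m in mics]
    ref = max(delays)
    out = np.zeros(recs.shape[1] + ref)
    for r, d in zip(recs, delays):
        pad = ref - d  # extra delay that equalizes total path delay
        out[pad:pad + recs.shape[1]] += r
    return out / len(mics)

# Steering at the true source yields much higher output energy than
# steering at an arbitrary wrong point.
e_focus = float(np.sum(delay_and_sum(recs, mics, src_pos) ** 2))
e_miss = float(np.sum(delay_and_sum(recs, mics,
                                    np.array([0.5, -1.0])) ** 2))
```

The proposal's adaptive beamformers would replace the fixed averaging weights with data-dependent ones, but the geometric delay-alignment step sketched here is the common starting point.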