A broad range of problems in computer graphics rendering, appearance acquisition, and imaging involves sampling, reconstruction, and integration of high-dimensional (4D-8D) signals. Real-time rendering of glossy materials and intricate lighting effects such as caustics, for example, can require pre-computing the response of the scene to different light and viewing directions, which is often a 6D dataset. Similarly, image-based appearance acquisition of facial details, car paint, or glazed wood requires taking images from many light and view directions. Even offline rendering of visual effects such as motion blur from a fast-moving car, or depth of field, involves high-dimensional sampling across time and lens aperture. The same problems arise in computational imaging applications such as light field cameras. While the PIs and others have made significant progress in the subsequent analysis and compact representation of some of these datasets, the initial full dataset must almost always still be acquired or computed by brute force, which is prohibitively expensive: it takes hours to days of computation or acquisition time and poses a challenge for memory usage and storage.

The PIs' goal in this project is to make fundamental contributions that enable dramatically sparser sampling and reconstruction of these signals, before the full dataset is acquired or simulated. The key idea is to exploit the structure of the data, which often lies in lower-frequency, sparse, or low-dimensional spaces. The PIs' recent collaboration on a Fourier analysis of motion blur has shown that the frequency spectrum of dynamic scenes is sheared into a narrow wedge in the space-time domain, which enables novel sheared (rather than axis-aligned) filters and sparse sampling. The PIs will build upon these preliminary results to develop a unified framework for frequency analysis and sparse data reconstruction of visual appearance in computer graphics. To this end, they will first lay the theoretical foundations, including a novel frequency analysis of Monte Carlo integration and a 5D space-time analysis of light fields. They will then develop efficient practical algorithms for a variety of problem domains, including sparse reconstruction of light transport matrices for relighting, sheared sampling and denoising for offline shadow rendering, time-coherent compressive sampling for appearance acquisition, and new approaches to computational photography and imaging.
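The sheared-wedge observation can be illustrated with a minimal numpy sketch (a hypothetical toy example, not the PIs' implementation): a pattern translating at constant velocity produces a space-time signal whose 2D spectrum concentrates on a sheared line rather than on a coordinate axis, which is why sheared filters can bound it more tightly than axis-aligned ones.

```python
import numpy as np

# Toy illustration (assumed setup, not the PIs' algorithm): a pattern
# moving at constant velocity v gives a space-time signal
# f(x, t) = g(x - v*t). Its 2D Fourier transform concentrates on the
# sheared line omega_t = -v * omega_x -- the narrow "wedge" that
# sheared (non-axis-aligned) filters exploit.
N, v = 128, 3.0
x = np.arange(N)                 # spatial coordinate (columns)
t = np.arange(N)[:, None]        # time coordinate (rows)
f = np.sin(2 * np.pi * 5 * (x - v * t) / N)   # 5-cycle pattern moving at v

F = np.fft.fft2(f)               # axes: (omega_t, omega_x)
it, ix = np.unravel_index(np.argmax(np.abs(F)), F.shape)
wt = it - N if it > N // 2 else it            # signed temporal frequency
wx = ix - N if ix > N // 2 else ix            # signed spatial frequency
# The dominant energy satisfies wt == -v * wx: it lies on the sheared
# line through the origin, not on either frequency axis.
```

An axis-aligned filter would have to cover the full bounding box of this line, while a filter sheared by the velocity v hugs it, which is the intuition behind the sparser sampling rates.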

Broader Impacts: From a theoretical perspective, this project will develop a fundamental signal-processing analysis of light transport, appearance, and imaging datasets, which will provide the foundation for further work not just in computer graphics but in signal processing, computer vision, and image analysis as well. Project outcomes will apply to a diverse set of problems and will lead to transformative advances across the spectrum of rendering and imaging applications. The PIs will leverage existing collaborations with industry to transition the new technologies to practical production use. Outreach to K-12 students and the public will be enabled by a new science popularization blog that will leverage the public's excitement about advances in digital photography to introduce novel technical concepts, as well as by events such as the Computer Science Education Day for high school students at UC Berkeley. The new algorithms and datasets resulting from this work will be made available to the research community; moreover, imaging algorithms will be released in open-source form to work with consumer digital and cell-phone cameras.

Project Report

3D rendering and photography deal with complex functions that describe the color of each light ray in a 3D scene. While the set of light rays in a scene is extremely large, this work is based on the premise that the actual complexity of the lighting function is not as high as one might fear: we say that it is *sparse*. We developed techniques that leverage this sparsity to dramatically reduce the cost of simulating physical lighting in a scene, as well as the number of samples that must be acquired for light-field photography.

Most of our work characterizes the sparsity of radiance in the Fourier domain, which represents functions as sums of sine waves of different frequencies. We carried out derivations based on the physics of light transport showing that the spectrum of local light fields is expected to be sparse. We used these insights to develop techniques that require fewer samples (either measured samples in a real scene or simulated samples in a 3D rendering). We made sure that our techniques can be implemented efficiently, in particular by focusing on so-called axis-aligned filtering, which allows us to quickly reconstruct final image values.

A major finding of this work is that whereas most computer graphics and computational photography techniques operate in the discrete domain, sparsity is most relevant in the continuous domain. We showed the impact of discretization on sparsity and developed techniques that operate in the discrete domain while leveraging sparsity in the continuous domain.

This work will have broader impact because sparsity applies to many functions and signals beyond computer graphics. We hope that our filtering and reconstruction techniques will be extended to handle other signals. Our approaches to efficient sampling and reconstruction can benefit all applications of rendering, such as architecture, entertainment, virtual sales, simulation, and training. They can also have a broad impact on digital photography by significantly lowering the cost of light field capture.
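As a toy illustration of the Fourier-sparsity premise (a hypothetical numpy sketch under assumed parameters, not the project's actual reconstruction pipeline): a signal composed of a few sinusoids is captured by a handful of Fourier coefficients, so keeping only the largest coefficients reconstructs it almost exactly from far less data.

```python
import numpy as np

# Assumed toy setup: a smooth, radiance-like 1D signal built from a DC
# term and two sinusoids. In the Fourier domain it has exactly 5
# nonzero coefficients, so a 5-coefficient representation of the
# 256-sample signal is essentially lossless.
N = 256
x = np.linspace(0.0, 1.0, N, endpoint=False)
signal = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * x) + 0.2 * np.cos(2 * np.pi * 7 * x)

F = np.fft.fft(signal)
k = 5                                  # keep only the 5 largest coefficients
keep = np.argsort(np.abs(F))[-k:]      # indices of the dominant frequencies
F_sparse = np.zeros_like(F)
F_sparse[keep] = F[keep]
reconstruction = np.fft.ifft(F_sparse).real

error = np.max(np.abs(reconstruction - signal))  # near machine precision
```

Real radiance signals are not exactly band-limited, but the same principle, that most of the energy lives in few coefficients, is what makes sparse sampling and reconstruction viable.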

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1116303
Program Officer: Ephraim Glinert
Budget Start: 2011-10-01
Budget End: 2014-09-30
Fiscal Year: 2011
Total Cost: $250,000
Name: Massachusetts Institute of Technology
City: Cambridge
State: MA
Country: United States
Zip Code: 02139