A fundamental challenge in computer graphics is to create interactive virtual environments that accurately depict the complex natural scenes of the real world. These virtual environments are vital for a wide variety of applications, including e-commerce, education, industrial design and architectural planning, games and movies, safety analysis and virtual training, and cultural heritage. Realistically simulating the visual appearance of the real world is extremely challenging because scenes of interest have complex geometry, materials, and lighting interacting across physical scales ranging from millimeter-sized surface bumps to large-scale structure. We call such scenes scale-complex. Current rendering methods are blind to scale, making it infeasible to realistically simulate the complex paths along which light reflects and scatters in such scale-complex scenes. This project develops a novel framework for realistically rendering images of scale-complex scenes. Importantly, the framework supports rich illumination phenomena and rendering effects such as indirect illumination, participating media, subsurface scattering, motion blur, and depth-of-field.

For the proposed framework to be scalable, it must perform well even as the complexity of the scene and of the simulated illumination phenomena grows. This project explores the following new approaches: (a) a unified treatment of all illumination phenomena and rendering effects, (b) novel multiresolution representations coupled with perceptual metrics based on early vision and higher-level vision to eliminate computation where it is not visually important, (c) new methods for accurately computing illumination detail as needed, with illumination-driven simplification of geometry and material, and (d) new hybrid CPU/GPU algorithms for interactive performance.

Project Report

An image of a scene conveys critical visual information about the objects in the scene (e.g., their shape and material) and about the scene's lighting and context. Many applications depend on such images to predict visual appearance. For example, in industrial and architectural design, it is useful to visualize how objects and buildings will look before they are constructed at full scale and at considerable cost. This capability encourages creative design and lets designers and users achieve the appearance they desire. These applications share a need for a very high standard of visual fidelity: they must accurately predict how objects will appear in the real world before they are ever built or made. Other application areas that require this level of visual fidelity include virtual cultural heritage, virtual training, and e-commerce. Achieving realism requires (1) detailed models of the shape and material of each object, and (2) accurate simulation of how light interacts with these objects and materials and then propagates through the scene. Realistic rendering of such scenes is a grand challenge because accurately simulating the physics of light is prohibitively expensive. Our project builds on a key insight to address this challenge: we use knowledge of how human beings perceive the world to focus computational effort where it contributes to an image, and to eliminate computation whose result will not be perceived by a human observer.

Impact. This new way of coupling perceptual knowledge with computational algorithms has changed how graphics algorithms are designed today, and the impact of this work has been significant. Our scalable rendering technology, Lightcuts, has been adopted by industry: Autodesk, the market leader in design software, uses Lightcuts (our research) as the core rendering engine of major, commercially available design and modeling products. Millions of images (and growing) have been rendered by users and designers using our technology for pre-visualization.

Over the course of this project we have pursued three major directions of research.

1. Micron-resolution representations of detailed shape. We have developed techniques to construct micron-resolution models of everyday materials like cloth. This work leverages technologies such as CT imaging to acquire the fine geometric detail that gives materials like silk and velvet their characteristic visual appearance.

2. Scaling to complex scenes. Rendering models that range from micron-resolution detail to city-size geometry requires algorithms that scale across this range of sizes. The key technology developed in this project is Lightcuts, a scalable rendering algorithm that uses knowledge of human perception to drive efficient image generation. Lightcuts supports complex effects such as motion blur, volumetric rendering, and depth-of-field. While these effects increase the cost of rendering, they often decrease the perceptual salience of scene features, so our key insight was to apply inexpensive lighting approximations aggressively wherever perceptual salience is reduced. Lightcuts thus simulates the physics of light using multi-scale approximations that are perceptually accurate yet scale from micron resolution to city scale. Prior perceptual methods could not predict when such approximations are possible without doing the very work of rendering the image for comparison. We show how to select approximations by predicting and bounding perceptual error (see the illustrative sketch after the summary below). The result is a huge reduction in computational cost, of three to six orders of magnitude, and the algorithm has been widely adopted in industry.

3. Perceptual knowledge. The insight that drives our work is that an understanding of human perception can be used to design algorithms that automatically focus computation where it affects perceived image fidelity. We therefore pursued two complementary goals. First, to understand image fidelity (when is an image good enough?), we introduced a new appearance-based measure of image fidelity, visual equivalence. Second, using this new fidelity measure, we designed scalable algorithms that faithfully and efficiently approximate lighting in complex scenes.

Summary. This project has developed a new scalable, appearance-preserving approach to graphics that has become an industry standard for interactive, realistic rendering of complex virtual worlds, and a powerful tool for research, design, commerce, and education.
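As a concrete illustration of the cut-selection step described in direction 2 above, the sketch below refines a per-pixel lighting estimate over a prebuilt binary light tree until every cluster's error bound falls below a small fraction (about 2%) of the current estimate. This is a minimal sketch under simplifying assumptions, not the project's implementation: the light tree is assumed already built, the error bound is reduced to inverse-square falloff over a bounding sphere, and all names (LightNode, lightcut, rel_threshold, and so on) are illustrative; the published method also bounds material, geometric, and visibility terms.

```python
import heapq
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class LightNode:
    # A cluster of point lights: 'intensity' is the summed power of the cluster
    # and 'representative' stands in for every light below this node.
    intensity: float
    representative: Tuple[float, float, float]  # representative light position
    bound_radius: float                         # radius of the cluster's bounding sphere
    left: Optional["LightNode"] = None
    right: Optional["LightNode"] = None

    def is_leaf(self) -> bool:
        return self.left is None and self.right is None


def cluster_contribution(node: LightNode, shading_point) -> float:
    """Estimate the cluster's contribution from its representative light
    (unshadowed inverse-square falloff only, for brevity)."""
    dx = [a - b for a, b in zip(node.representative, shading_point)]
    dist2 = max(sum(d * d for d in dx), 1e-6)
    return node.intensity / dist2


def cluster_error_bound(node: LightNode, shading_point) -> float:
    """Conservative upper bound on the error of using the representative:
    assume every light in the cluster could sit at the closest point of the
    bounding sphere (illustrative; the real bound covers more terms)."""
    dx = [a - b for a, b in zip(node.representative, shading_point)]
    dist = max(sum(d * d for d in dx) ** 0.5 - node.bound_radius, 1e-3)
    return node.intensity / (dist * dist)


def lightcut(root: LightNode, shading_point, rel_threshold: float = 0.02,
             max_cut: int = 1000) -> float:
    """Refine a cut through the light tree until every cluster's error bound
    is below rel_threshold (~2%, a Weber-law-style criterion) of the current
    total estimate, or the cut size limit is reached."""
    heap: List[Tuple[float, int, LightNode, float]] = []  # max-heap via negated keys
    counter = 0  # tie-breaker so nodes are never compared directly

    def push(node: LightNode) -> float:
        nonlocal counter
        err = 0.0 if node.is_leaf() else cluster_error_bound(node, shading_point)
        contrib = cluster_contribution(node, shading_point)
        heapq.heappush(heap, (-err, counter, node, contrib))
        counter += 1
        return contrib

    total = push(root)
    while len(heap) < max_cut:
        neg_err, _, node, contrib = heap[0]
        if -neg_err <= rel_threshold * total:
            break  # worst remaining cluster error is visually negligible
        heapq.heappop(heap)
        total -= contrib
        for child in (node.left, node.right):
            if child is not None:
                total += push(child)
    return total


# Hypothetical two-light example: a root cluster over two leaf lights.
leaf_a = LightNode(intensity=50.0, representative=(0.0, 5.0, 0.0), bound_radius=0.0)
leaf_b = LightNode(intensity=30.0, representative=(4.0, 5.0, 0.0), bound_radius=0.0)
root = LightNode(intensity=80.0, representative=(0.0, 5.0, 0.0), bound_radius=2.0,
                 left=leaf_a, right=leaf_b)
print(lightcut(root, (0.0, 0.0, 0.0)))
```

The design point this sketch tries to convey is that the cut is chosen greedily: the cluster with the largest error bound is refined first, so computation concentrates exactly where the cheap approximation would be most visible.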

Agency: National Science Foundation (NSF)
Institute: Division of Computing and Communication Foundations (CCF)
Application #: 0644175
Program Officer: Lawrence Rosenblum
Budget Start: 2007-02-01
Budget End: 2013-01-31
Fiscal Year: 2006
Total Cost: $456,625
Name: Cornell University
City: Ithaca
State: NY
Country: United States
Zip Code: 14850