This project addresses the challenge of real-time rendering with complex illumination, realistic materials, and accurate simulation of light scattering, while allowing these parameters to be manipulated interactively. Fundamental questions about the representation of appearance are explored, leading to the development of a new class of real-time algorithms. Specifically, the focus is on three main directions: (i) Mathematical and Empirical Analysis of the complexity of appearance representations and light transport; (ii) Computational Techniques such as fine-to-coarse and coarse-to-fine algorithms, function dictionaries, and hierarchical principal component analysis; and (iii) Algorithms and Systems supporting interactive manipulation of illumination, materials, and viewpoint.
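
As a rough illustration of the computational techniques named above, the sketch below shows how per-cluster principal component analysis might compress a precomputed light-transport matrix so that relighting reduces to a few small matrix products. The matrix sizes, the random clustering, and all function names are hypothetical placeholders, not taken from the project.

```python
# Hypothetical sketch (not the project's actual code): compressing a
# precomputed light-transport matrix with per-cluster principal component
# analysis, one of the computational techniques named above.  All sizes,
# names, and the random clustering are illustrative placeholders.
import numpy as np


def cluster_pca_compress(T, n_clusters=8, n_components=16, seed=0):
    """Partition the rows (surface samples) of transport matrix T into
    clusters and keep only the leading principal components per cluster."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_clusters, size=T.shape[0])  # placeholder clustering
    compressed = []
    for c in range(n_clusters):
        rows = np.flatnonzero(labels == c)
        if rows.size == 0:
            continue
        block = T[rows]
        mean = block.mean(axis=0)
        # SVD of the mean-centered block yields the principal directions.
        _, _, vt = np.linalg.svd(block - mean, full_matrices=False)
        basis = vt[:n_components]            # top-k principal components
        coeffs = (block - mean) @ basis.T    # per-sample projection weights
        compressed.append((rows, mean, basis, coeffs))
    return compressed


def relight(compressed, light):
    """Approximate outgoing radiance for a new lighting vector using only
    the compact per-cluster representation."""
    out = {}
    for rows, mean, basis, coeffs in compressed:
        block_radiance = mean @ light + coeffs @ (basis @ light)
        out.update(zip(rows.tolist(), block_radiance))
    return out


# Tiny synthetic example: 1024 surface samples lit by 256 basis lights.
T = np.random.default_rng(1).standard_normal((1024, 256))
model = cluster_pca_compress(T)
radiance = relight(model, np.random.default_rng(2).standard_normal(256))
```

A hierarchical variant would apply the same idea recursively within each cluster; the point of the sketch is only that, after compression, relighting reduces to a few small matrix-vector products, which is what makes interactive manipulation of illumination plausible.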

Computer graphics and scientific visualization are increasingly used in areas such as simulation and training, automobile design, architectural renderings, defense, electronic commerce, and entertainment. With today's technology, it is possible to create strikingly realistic images using sophisticated but slow rendering algorithms. Such algorithms, however, cannot address the growing need for effective methods to manipulate and interact with 3D environments. Modern PCs can render scenes with millions of polygons at interactive rates, but this performance comes at the cost of a considerable loss in quality and realism. This project seeks to overcome these limitations by producing highly realistic images at interactive rates.

Over the last thirty years, computer graphics research has produced technology that improves both the realism and the generation speed of computer-generated images. However, there is still no widely used method for creating realistic images, and doing so remains difficult in practice. This project aims to develop a practical method for realistic image creation that addresses the visual effects the graphics and perception communities believe to be salient. The method is intended to be robust, relatively simple, and to have the potential to become interactive within the next decade.

The ability to rapidly generate realistic images is useful in many applications. For example, flight and driving simulators rely on realistic imagery, but the interactive nature of these systems limits the realism that is currently feasible; the proposed work should extend these limits. The proposed method would also be useful for generating images for planning and education, helping visualize the results of reconstruction efforts, habitat change, and urban planning. Finally, the method should be useful for generating synthetic images as input to automated vehicles, where many different scenes and atmospheric conditions could be produced rapidly and at low expense.

Agency: National Science Foundation (NSF)
Institute: Division of Computing and Communication Foundations (CCF)
Application #: 0305322
Program Officer: Lawrence Rosenblum
Budget Start: 2003-12-15
Budget End: 2007-11-30
Fiscal Year: 2003
Total Cost: $224,734
Name: Columbia University
City: New York
State: NY
Country: United States
Zip Code: 10027