This research investigates how to produce realistic computer graphics images efficiently. Applications such as architectural and engineering design, virtual training, telecollaboration, and game and movie rendering place ever-increasing demands on the ability to render accurate, convincing images of complex scenes. Much of that scene complexity, however, is irrelevant to how humans perceive the rendered image. The approach explored here exploits the limitations of the human visual system: computational effort is focused automatically on the visual features that are important for convincing the eye, while work is saved wherever the eye would be insensitive to the difference. This research develops new, feature-based rendering techniques that handle the large, complex, realistic scenes required by future applications more efficiently.
The goal of the research is a scalable, feature-based graphics pipeline that exposes features explicitly at every level, from modeling to the final rendered image. When computational work is proportional to visual features, rendering cost tracks the intrinsic visual complexity of the output image rather than other measures of scene complexity, such as polygon count; a pipeline built this way is fundamentally more scalable. The researchers are investigating efficient algorithms for finding visually important features, new feature-based scene and display representations for high-quality modeling and display, and new rendering algorithms that exploit features to provide scalable, efficient, high-quality image synthesis.