Realistic rendering systems can compute physically accurate images of a wide range of materials. However, some of today's most important challenges in rendering, particularly rendering human beings, involve densely packed elements -- the hairs on a head, the threads in a shirt -- that are readily visible at ordinary distances. Because these elements are packed so closely together, accuracy requires a full simulation of how light reflects from hair to hair, or from thread to thread, before it reaches the eye. These materials are difficult to work with: their parts are too numerous for a detailed simulation to be practical, yet not small enough to be modeled as a continuous medium. This research explores methods that combine both approaches without sacrificing accuracy.
Correctly rendering these dense aggregations is an important open problem in computer graphics, one that requires fundamental theoretical and algorithmic advances. Directly simulating multiple scattering on such intricate models is prohibitively costly, so in practice it is approximated by simple heuristics. However, for light-colored materials such as blond hair or white cotton, very little light is absorbed in any single interaction, so the visual appearance of these materials is significantly affected by multiple scattering; correct results require a complete scattering simulation.
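A back-of-the-envelope calculation illustrates why (the albedo values here are illustrative, not measurements of any particular fiber). If each interaction reflects a fraction $\alpha$ of the incident energy, the energy remaining after $n$ bounces is
\[
E_n = \alpha^n E_0 .
\]
For a dark fiber with $\alpha = 0.5$, $E_{10} \approx 0.001\,E_0$, so paths with more than a few bounces are negligible; for a light fiber with $\alpha = 0.95$, $E_{10} \approx 0.60\,E_0$ and even $E_{50} \approx 0.08\,E_0$, so paths with tens of bounces still carry visible energy.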
This research project aims to simulate scattering in dense geometry efficiently by approximating the whole aggregation as a continuous scattering medium. This allows methods for volumetric light transport to be applied when possible while still using the detailed geometry when necessary, as sketched below. Constructing this approximation requires a fundamentally new understanding of the relationship between geometric and volumetric rendering; implementing it will require new algorithms, and applying it will require improvements to the state of the art in volumetric rendering.
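The following C++ sketch illustrates the hybrid idea only; it is not the project's algorithm. A path deep inside the aggregate is advanced with standard volumetric free-flight sampling through a continuous medium whose extinction is derived from fiber density, while a path near visible structure would fall back to the explicit fiber geometry. All names, thresholds, and parameter values (AggregateMedium, detailThreshold, the albedo and extinction numbers) are hypothetical placeholders.

```cpp
// Minimal sketch of a hybrid scattering walk, under the assumptions stated above.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };

struct AggregateMedium {
    double sigma_t;  // extinction coefficient derived from fiber density (1/m)
    double albedo;   // single-scattering albedo of one fiber interaction
};

// Standard exponential free-flight sampling: t = -ln(1 - u) / sigma_t.
double sampleFreeFlight(const AggregateMedium& m, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return -std::log(1.0 - u(rng)) / m.sigma_t;
}

// Uniform direction on the sphere, standing in for a real fiber phase function.
Vec3 sampleIsotropicDirection(std::mt19937& rng) {
    const double kPi = 3.14159265358979323846;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double z = 1.0 - 2.0 * u(rng);
    double phi = 2.0 * kPi * u(rng);
    double r = std::sqrt(std::fmax(0.0, 1.0 - z * z));
    return {r * std::cos(phi), r * std::sin(phi), z};
}

int main() {
    std::mt19937 rng(7);
    AggregateMedium hair{2000.0, 0.95};     // illustrative values, not measured
    Vec3 p{0.0, 0.0, 0.0};
    Vec3 d{0.0, 0.0, 1.0};
    double throughput = 1.0;
    const double detailThreshold = 0.0005;  // hypothetical switching distance

    for (int bounce = 0; bounce < 20; ++bounce) {
        // Hypothetical query: distance to geometry that must be resolved exactly
        // (silhouettes, strands close to the camera).  Constant here for brevity.
        double distanceToDetail = 0.01;
        if (distanceToDetail < detailThreshold) {
            // Near visible structure: trace against the explicit fibers
            // (ray/curve intersection omitted from this sketch).
            break;
        }
        // Deep inside the aggregate: take one volumetric scattering step.
        double t = sampleFreeFlight(hair, rng);
        p = {p.x + t * d.x, p.y + t * d.y, p.z + t * d.z};
        d = sampleIsotropicDirection(rng);
        throughput *= hair.albedo;          // energy lost per scattering event
    }
    std::printf("throughput after walk: %.3f\n", throughput);
    return 0;
}
```

The design choice the sketch is meant to convey is the switch itself: the expensive explicit-geometry path is reserved for regions where individual fibers are visually resolvable, and everywhere else the aggregate is treated as a participating medium, which is where established volumetric transport methods can be applied.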