For many applications, the nature of visual content is rapidly changing. Two-dimensional images are being replaced by higher-dimensional appearance models that summarize the (very large) set of images of an object under many viewpoints and lighting conditions. These models enable an object to be virtually rotated, re-lit, and composited into other scenes, and they improve our ability to automate visual tasks such as detection, recognition, and tracking. Yet, despite the growing number of applications, suitable appearance models of common objects are remarkably hard to find, largely because the complexity of appearance makes appearance capture a formidable task.

The goal of the proposed research is to develop a framework for appearance capture that can be applied to any opaque, non-refracting surface and that is simultaneously accurate and practical. Ultimately, we seek to enable "ubiquitous appearance capture": the widespread ability to acquire appearance models outside of the laboratory, in homes, offices, museums, hospitals, and in the field. To achieve this goal, we take a physics-based approach that exploits common reflectance properties (e.g., Helmholtz reciprocity, reflectance separability, and compressibility) that we believe have yet to be fully utilized. The benefits of this approach are twofold. First, it enables image-based acquisition of both shape and reflectance from a small number of uncalibrated cameras and light sources, eliminating the need for laser range scanners, projector-based structured lighting, or other specialized equipment. Second, it makes possible the modeling of a broad class of surfaces, including those with complex reflectance that is not necessarily well represented by low-dimensional (i.e., parametric) models. This second property is essential for modeling real-world surfaces and distinguishes the approach from most existing image-based methods, which rest on restrictive assumptions about the nature of surface reflectance.

This research activity is closely linked to an educational program that includes the development of courses in human and computer vision at both the undergraduate and graduate levels, as well as the creation of undergraduate and graduate research opportunities. The educational program extends beyond the university by making a mobile acquisition system available as a teaching tool in museums and classrooms in the Boston area, and by making appearance models and software available through the Internet.
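For reference, Helmholtz reciprocity, the first of the reflectance properties listed above, is the symmetry of the bidirectional reflectance distribution function (BRDF) under exchange of its incoming and outgoing directions. A minimal statement, in standard notation rather than notation taken from the project's own publications, is

    f_r(\hat{\omega}_i \rightarrow \hat{\omega}_o) = f_r(\hat{\omega}_o \rightarrow \hat{\omega}_i).

Because this symmetry holds for essentially any physically valid BRDF, image pairs captured with the camera and light source exchanged carry shape information that does not depend on the material; this is the property exploited by the reciprocity-based methods described in the project report below.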

URL: www.eecs.harvard.edu/~zickler/research.html

Project Report

We developed efficient acquisition strategies, based on mathematical and computational foundations, for measuring three-dimensional shape and material information from digital photographs. In addition to providing the statistical and structural information about the visual world that is required for the success of future computer vision systems, these strategies are useful for digitally "capturing appearance" to enhance physical realism in computer graphics applications, including virtual and augmented reality. Our contributions fall into two categories.

1. Active shape and reflectance capture. In many cases we can manipulate scene lighting while capturing images, and this provides additional constraints for recovering shape and material information. For accurate results, reflectance must be obtained along with shape from a single set of images. We have shown that certain imaging configurations decouple shape and reflectance information in image data, so that each can be inferred without making assumptions about the other (a sketch of the reciprocity-based constraint appears after this report). These methods exploit isotropy and reciprocity (Zickler, 2006; Tan et al., 2011), tangent-plane symmetries (Holroyd et al., 2008), and reciprocal images with structured lighting (Holroyd et al., 2010). Separately, we developed methods for inferring reflectance and shape simultaneously while relaxing the restrictions on both as much as possible (Alldrin et al., 2008; Sunkavalli et al., 2010). A general overview of the current state of the art can be found in our tutorial and survey (Weyrich et al., 2008).

2. Passive shape and reflectance capture. A separate and complementary approach is to learn about the world's shape, materials, and lighting from passive observations of objects and scenes under naturally occurring illumination. Among other things, this would enable the automatic learning of shape and material information from the billions of images and videos being shared online. Along these lines, we developed mathematical and computational tools for inferring an object's intrinsic color (Chong et al., 2007; Sunkavalli et al., 2008; Owens et al., 2011), inferring glossiness and other reflectance properties (Mallick et al., 2006; Romeiro et al., 2008; Romeiro and Zickler, 2010), and recovering shape information from images of curved mirrors (Adato et al., 2007).
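As a concrete illustration of how reciprocity decouples shape from reflectance (category 1 above), the following Python/NumPy sketch estimates a surface normal from reciprocal image pairs in the spirit of Helmholtz stereopsis. It assumes a simple geometry with known camera and point-light positions; the function names and interface are illustrative assumptions, not code from the cited publications.

import numpy as np

def reciprocal_constraint(i_l, i_r, p, o_l, o_r):
    # Constraint vector w from one reciprocal image pair at a candidate surface point p.
    # i_l: intensity seen by the camera at o_l with the light at o_r;
    # i_r: intensity seen after the camera and light positions are swapped.
    # Helmholtz reciprocity implies w . n = 0 for the true surface normal n,
    # regardless of the (possibly anisotropic) BRDF at p.
    v_l = o_l - p
    v_r = o_r - p
    return (i_l * v_l / np.linalg.norm(v_l) ** 3
            - i_r * v_r / np.linalg.norm(v_r) ** 3)

def estimate_normal(pairs, p):
    # Least-squares normal from two or more reciprocal pairs: stack the
    # constraint vectors into W and take the right singular vector with the
    # smallest singular value (the approximate null space of W).
    W = np.array([reciprocal_constraint(i_l, i_r, p, o_l, o_r)
                  for (i_l, i_r, o_l, o_r) in pairs])
    _, _, vt = np.linalg.svd(W)
    n = vt[-1]
    return n if n[2] > 0 else -n  # orient toward the cameras, assumed along +z

With three or more pairs the system is over-determined, and the smallest singular value of W also acts as a surface-consistency score: it approaches zero only when the candidate point p lies near the true surface, which is one way reciprocal pairs can additionally drive a depth search.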

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0546408
Program Officer: Jie Yang
Budget Start: 2006-02-01
Budget End: 2012-01-31
Fiscal Year: 2005
Total Cost: $457,372
Name: Harvard University
City: Cambridge
State: MA
Country: United States
Zip Code: 02138