The ability of computers to unify visual information from multiple imaging modes into comprehensible illustrations will transform how scientists, engineers, and humanities scholars gain and communicate knowledge about the visual world. Achieving this goal, however, will require a joint focus on developing novel shape and image analysis methods, and designing collaborative user interfaces that allow multiple domain experts and illustrators to bring together their expertise. The Collaborative Algorithmic Rendering Engine (CARE) will be an open-source tool for extracting and merging visual details available only under certain lighting conditions, certain wavelengths, or certain imaging modalities. By focusing on minimal user effort, cross-site collaborative visualization design, and integrated archiving and process history (provenance) tracking, the CARE tool is specifically designed to remove existing obstacles to widespread adoption of digital tools for visual analysis and communication.

As part of the project, investigators are developing novel image analysis techniques that build upon existing technologies such as Reflectance Transformation Imaging (RTI) and non-photorealistic rendering using images with normals (RGBN NPR), which have already received enormous interest within the cultural heritage community. The research includes methods for: (1) analyzing the collection of images to decompose them into "maps" of color, orientation, and material at each pixel; (2) performing an arbitrary sequence or combination of image-processing operations on some or all of the maps separately; and (3) combining several maps into the final illustration. The whole process is driven by (4) a user interface designed for interactive response and including special features that enable collaborative illustration design.
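To make steps (1) and (3) concrete, the sketch below works through the simplest possible case: decomposing a stack of images taken under known light directions into per-pixel albedo (color) and surface-normal (orientation) maps with a Lambertian photometric-stereo model, then recombining the maps into a re-shaded illustration. The NumPy implementation, the function names, and the Lambertian assumption are illustrative choices for this sketch, not the algorithms used in the CARE tool itself.

    import numpy as np

    def decompose(images, light_dirs):
        """Estimate per-pixel albedo and normal maps from a stack of grayscale
        images (k, h, w) lit from known unit directions (k, 3). Assumes a
        Lambertian surface, i.e. the textbook photometric-stereo model."""
        k, h, w = images.shape
        I = images.reshape(k, -1)                            # one column per pixel
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # G = albedo * normal, shape (3, h*w)
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-8)               # unit surface normals
        return albedo.reshape(h, w), normals.reshape(3, h, w)

    def reshade(albedo, normals, light, gain=1.0):
        """Recombine the maps into an illustration: Lambertian shading under a
        chosen light direction, with an optional gain to exaggerate shape cues."""
        shading = np.tensordot(light, normals, axes=1)       # per-pixel n . l
        return np.clip(gain * albedo * np.maximum(shading, 0.0), 0.0, 1.0)

Step (2) would operate on the individual maps between these two calls, for example sharpening the normal map or flattening the albedo map before re-shading.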

The project involves a close collaboration between a university-based research group, responsible for development of new technologies, and a non-profit company with a demonstrated track record of working with museums and archaeological sites to deploy novel imaging and computational photography systems. This joint development will ensure that the underlying technologies will have immediate high impact in the field: cultural heritage scholars and scientists will be able to generate high-quality, comprehensible illustrations for scientific papers and textbooks, with control over selective emphasis, contrast, attention, and abstraction, at lower cost and greater flexibility than generating such figures by hand. The subject matter of art history also offers the unique opportunity to stimulate the interest of students who would not normally take courses in computer science, broadening the class of students exposed to the tools and capabilities of computing.

Project Report

High-resolution digital photographs remain one of the primary ways of documenting artworks, archaeological objects, and items of significant value to our cultural heritage. Although a single photo can convey a great deal of information, a collection of images, each taken with the light coming from a different direction, gives an easier and less ambiguous understanding of the shape and materials of the object being studied. Simply switching between the photos is often sufficient, but more compelling visualizations can be created by using computers to combine the information present in these photos (or in other collections of photos of the same object, taken with different color filters or with different imaging techniques such as infrared). For example, we can create a visualization that automatically takes the detail visible in one part of the object in one photo and combines it with a part of another photo that reveals more detail elsewhere on the object.

The aim of this project is to build a set of computer programs that assist art historians and conservators in better understanding the objects they study by creating the kinds of visualizations described above. In addition, to be useful to scholars in the future, these visualizations must be accompanied by provenance: a detailed record of exactly how the images were taken and what steps the computer used to manipulate and combine them into the final visualizations. This is very different from common software such as Photoshop, which allows people to manipulate images but makes it hard to know and reproduce exactly how an input image was transformed into an output.

The project was undertaken by a collaborative team of researchers and students at Princeton University, together with the employees of Cultural Heritage Imaging, Inc. (CHI), a non-profit company dedicated to promoting digital imaging tools among cultural heritage professionals at the top museums, libraries, and schools in the world. As part of the project we built three computer tools. The first records information about photos as they are taken, beginning the process of building up a "digital lab notebook". The second makes sure the images are precisely aligned, correcting for any camera shake or inadvertent motion of the object being photographed. The third allows the user to experiment with different methods of manipulating and combining images to produce compelling and informative visualizations. Throughout, the programs keep track of exactly how the user manipulates the images and save this information for posterity.

The specific technical contributions of this work include methods for precisely aligning images, as well as methods for estimating information about the shape (surface normals) of the object being imaged. Another major contribution is the standardization of the "digital lab notebook," which keeps track, in a precise and well-defined format, of exactly how the images were taken, transformed, and combined. The broader impacts of the work include the many multi-day training sessions run by CHI, during which the digital tools we are developing were introduced to conservators, art historians, and students. The tools we developed are in the process of being released freely to the public (except for the last one, which is still under development as of 2014) and will promote the use of digital imaging technologies among cultural heritage scholars.
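To give a sense of what the "digital lab notebook" might record, the sketch below shows one possible shape for such a provenance log: an append-only list of entries, each tying an operation and its parameters to content hashes of its input and output images, so that a scholar can later verify and reproduce every step. The format standardized by the project is not reproduced here; the class name, field names, and JSON encoding are assumptions made for this illustration.

    import datetime, hashlib, json

    def file_hash(path):
        """Content hash, so the notebook can verify that an image is unchanged."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    class LabNotebook:
        """Append-only record of how images were captured, transformed, and combined."""
        def __init__(self, path):
            self.path = path
            self.entries = []

        def record(self, operation, inputs, outputs, parameters):
            self.entries.append({
                "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
                "operation": operation,
                "inputs": {p: file_hash(p) for p in inputs},
                "outputs": {p: file_hash(p) for p in outputs},
                "parameters": parameters,
            })
            with open(self.path, "w") as f:
                json.dump(self.entries, f, indent=2)

    # Example: log an alignment step (file names are hypothetical).
    # notebook = LabNotebook("notebook.json")
    # notebook.record("align", ["shot_01.tif"], ["shot_01_aligned.tif"],
    #                 {"method": "translation-only", "reference": "shot_00.tif"})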

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1027962
Program Officer: Sylvia Spengler
Budget Start: 2010-09-15
Budget End: 2013-08-31
Fiscal Year: 2010
Total Cost: $555,000
Name: Princeton University
City: Princeton
State: NJ
Country: United States
Zip Code: 08544