Wojciech Matusik, MIT, and Hanspeter Pfister, Harvard University

Novel digital output devices, such as stereoscopic TVs, passive (e-Ink) displays, and 3D printers, are entering the mass market. They are rapidly improving in quality and decreasing in price. This trend empowers users to consume and produce digital media like never before. However, while there has been tremendous progress in the hardware development of these output devices, the accompanying digital content creation software, algorithms, and tools are largely underdeveloped. For example, creating a 3D hardcopy of an animated computer graphics character is well beyond the reach of consumers, and approximating the character's appearance and deformation behavior using multi-material 3D printers is difficult, or perhaps even impossible, for professionals. The main issues are a lack of accurate previews of how the output will look, a lack of standardization between devices with similar capabilities, and a lack of accurate conversion tools and algorithms to go from the virtual (i.e., the computer model) to the real (i.e., the physical output).

This research involves the development of a complete process and software framework for moving from abstract computer models to their physical counterparts efficiently and accurately. Designing this process poses the following fundamental computational challenges: (1) accurate and efficient simulation methods that can predict the properties and behavior of an output without physically generating it; (2) efficient methods to compute an output gamut that describes the physically realizable outputs of a given device; (3) general gamut mapping algorithms that convert abstract computer models to realizable points in the device gamut; and (4) accurate perceptual metrics for comparing different output elements during gamut mapping. The research focuses on two emerging classes of important output devices: multi-view auto-stereoscopic displays and multi-material 3D printers. It is creating a complete and general software architecture that will support both existing and future output devices.
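Challenges (3) and (4) interact directly: a gamut mapping algorithm needs a perceptual metric to decide which realizable output best approximates an unrealizable target. A minimal sketch of that interaction, assuming a device gamut represented as a finite set of sampled realizable outputs and an illustrative weighted distance as the perceptual metric (both the weights and the function names are hypothetical, not taken from this project):

```python
import math

def perceptual_distance(a, b, weights=(1.0, 0.5, 0.5)):
    # Hypothetical weighted Euclidean metric: some output attributes
    # matter more perceptually than others (weights are illustrative).
    return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))

def gamut_map(target, gamut_samples, metric=perceptual_distance):
    # Challenge (3): map an abstract target to the closest physically
    # realizable point in the sampled device gamut, where "closest"
    # is judged by the perceptual metric from challenge (4).
    return min(gamut_samples, key=lambda s: metric(target, s))

# Toy gamut: the device can realize only these three outputs.
gamut = [(0.2, 0.1, 0.1), (0.8, 0.4, 0.4), (0.5, 0.9, 0.2)]
print(gamut_map((0.9, 0.5, 0.5), gamut))  # -> (0.8, 0.4, 0.4)
```

A practical system would replace the sampled set with a continuous gamut representation computed per device (challenge 2), but the selection structure is the same.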

Project Report

Novel output devices, such as stereoscopic 3D TVs and 3D printers, are entering the mass market. They are rapidly improving in quality and decreasing in price. This trend empowers users to consume and produce digital media like never before. However, while there has been tremendous progress in the hardware development of these output devices, the accompanying digital content creation software, algorithms, and tools are largely underdeveloped. The main issues are a lack of accurate previews of how the output will look, a lack of standardization between devices with similar capabilities, and a lack of accurate conversion tools and algorithms to go from the virtual (i.e., the computer model) to the real (i.e., the physical output). The overall situation is analogous to the digital printing and content creation revolution of the early 1980s before the advent of PostScript. This research has focused on two emerging classes of important output devices: multi-material 3D printers and multi-view auto-stereoscopic 3D displays. The research has produced complete and general software architectures that allow moving from abstract computer models to accurate device outputs. In the context of multi-material 3D printers, this research has produced OpenFab, a direct specification pipeline for multi-material fabrication inspired by the programmable graphics pipelines used for film and real-time rendering. This architecture handles 3D printing of multi-material objects with arbitrary complexity. As an alternative to directly specifying material composition, it is often more natural to specify an object by defining its functional goal (e.g., a specific color or deformation behavior). To solve this task, this research has also yielded Spec2Fab, a computationally efficient and general process for translating functional requirements into fabricable 3D prints. Spec2Fab provides an abstraction mechanism that simplifies the design, development, implementation, and reuse of fabrication algorithms.

In the context of stereoscopic 3D, this research has addressed the problem of content generation for multi-view autostereoscopic displays, new types of displays that provide a 3D experience without special glasses. This research has produced efficient computer processes for converting standard stereoscopic 3D content to the content required by multi-view 3D displays. Furthermore, this research has developed novel models of stereoscopic perception. These models are used to adapt stereoscopic content to a given 3D display and improve the quality of the resulting 3D experience.
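The analogy to programmable graphics pipelines can be made concrete: just as a fragment shader procedurally assigns a color per pixel, a procedural stage in such a pipeline assigns a material per sampled point of the object's volume. The following is a minimal sketch in that spirit; the function names, materials, and grid evaluation here are illustrative assumptions, not OpenFab's actual API:

```python
def layered_fablet(x, y, z):
    # Procedurally pick a material per sample point, the way a fragment
    # shader picks a color per pixel: alternate rigid and elastic
    # layers every 2 mm along the z (build) axis.
    return "rigid" if int(z // 2.0) % 2 == 0 else "elastic"

def rasterize(fablet, size_mm, voxel_mm):
    # Evaluate the procedural stage on a regular voxel grid. A real
    # pipeline would stream this evaluation, so objects of arbitrary
    # complexity never need to be stored at full printer resolution.
    n = int(size_mm / voxel_mm)
    return [[[fablet(i * voxel_mm, j * voxel_mm, k * voxel_mm)
              for k in range(n)] for j in range(n)] for i in range(n)]

grid = rasterize(layered_fablet, size_mm=8.0, voxel_mm=1.0)
print(grid[0][0])  # material assignment along z for one column
```

A functional-goal system in the spirit of Spec2Fab would instead search over the parameters of such procedures (here, the layer period) until a simulated property of the print matches the user's specification.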

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1116296
Program Officer: Lawrence Rosenblum
Project Start:
Project End:
Budget Start: 2011-08-01
Budget End: 2014-07-31
Support Year:
Fiscal Year: 2011
Total Cost: $257,998
Indirect Cost:
Name: Massachusetts Institute of Technology
Department:
Type:
DUNS #:
City: Cambridge
State: MA
Country: United States
Zip Code: 02139