This research will advance the state of the art in three-dimensional (3D) shape synthesis by developing generative probabilistic models that enable the automatic understanding of semantics from shape geometry, leading to new computational modeling algorithms that allow anybody to easily create compelling and highly detailed 3D content. Users will be able to create shapes simply by providing high-level specifications such as shape types, parts, semantic shape attributes, landmark points, or sketches, through simple and intuitive user interfaces. These models will also enable the computer to infer complete geometry from partial geometric data acquired by range cameras, fill in missing shape parts, and robustly recognize objects in a scene acquired by 3D sensors. Project outcomes will advance the state of the art in 3D modeling, providing users with intuitive tools that significantly lower the barrier to rapid and easy creation of detailed shapes. Such tools are becoming increasingly important given the growing interest in 3D models in scientific and engineering fields such as collaborative virtual environments, augmented reality, simulation, computer-aided design, and architecture. In particular, this work will significantly benefit 3D printing, where despite hardware advances the main bottleneck remains the creation of shapes to be supplied to the printer. The research will also advance the state of the art in shape understanding and object recognition, which are important for computer vision and robotics applications.

The key idea behind these generative models is that they represent complex hierarchical compositions, correlations, and variations of detailed geometric shape features, as well as their relationships with high-level semantic shape attributes. The models will be automatically learned from large shape repositories available on the Web, after the input shapes are pre-processed by new algorithms the Principal Investigator will develop for simultaneous shape segmentation and landmark localization, so that their parts and points are consistently labeled. Existing shape synthesis algorithms are limited to re-using and re-combining shape parts from a repository, or to synthesizing shapes in specific classes (such as human bodies or faces), with limited geometric variability and no structural or semantic variability. The Principal Investigator's generative models, in contrast, will learn how to densely place points and patches to create new plausible shapes in complex domains such as furniture, vehicles, tools, and creatures. Inference algorithms built upon the generative models will be able to synthesize shapes given linguistic terms or sparse geometric input. As a result, the research will lead to new 3D content creation tools that will transform the field of computational modeling: instead of executing a series of painstaking low-level geometric editing and manipulation commands, users will perform simple, easy, and intuitive interactions to achieve their design goals.
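To make the idea of a part-based generative shape model concrete, the following is a minimal illustrative sketch, not the proposal's actual model: a toy hierarchical sampler in which a shape category defines prior probabilities over parts, and each included part gets a scalar "size" standing in for its learned geometric distribution. All category names, part names, and probabilities here are hypothetical placeholders.

```python
import random

# Hypothetical part priors per shape category. In a learned model these
# would come from statistics over a large labeled shape repository.
PART_PRIORS = {
    "chair": {"seat": 1.0, "back": 0.9, "armrest": 0.4, "leg": 1.0},
    "table": {"top": 1.0, "leg": 1.0, "drawer": 0.3},
}

def sample_shape(category, rng=random):
    """Sample a part composition for the given shape category."""
    parts = []
    for part, prob in PART_PRIORS[category].items():
        if rng.random() < prob:  # include each part with its prior probability
            # A real model would sample dense point/patch placements; here a
            # single Gaussian "size" parameter stands in for part geometry.
            parts.append((part, round(rng.gauss(1.0, 0.1), 3)))
    return parts

random.seed(0)
chair = sample_shape("chair")
```

A full model of the kind the proposal describes would additionally capture correlations between parts (e.g., armrest style depending on back style) and condition the sampler on semantic attributes or sketches supplied by the user.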

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1422441
Program Officer: Ephraim Glinert
Project Start:
Project End:
Budget Start: 2014-07-01
Budget End: 2018-06-30
Support Year:
Fiscal Year: 2014
Total Cost: $499,997
Indirect Cost:
Name: University of Massachusetts Amherst
Department:
Type:
DUNS #:
City: Hadley
State: MA
Country: United States
Zip Code: 01035