This project improves a computer's ability to interpret the shape and material of objects from visual sensors. The research hypothesizes that full 3D object shape can be estimated by matching visual features of an observed object to an object of known shape in a dataset, transferring the known shape, and deforming it to better account for the spatial correspondences of the matched features. The research represents materials at multiple scales, separately encoding fine-scale bumps and grooves and the larger-scale patterns that characterize material categories. Because image properties arise from the combination of shape, material, and illumination, the research also involves developing algorithms to estimate these three factors jointly. The developed technologies can be applied to automated systems, personal and industrial robotics, surveillance and security, transportation, image retrieval, image editing and manipulation, and content creation. The project contributes to education through student projects, course development, and workshops and tutorials that reach a broader audience.
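
As a rough illustration of the retrieve-then-deform hypothesis above, the following Python sketch matches a query feature descriptor against a small exemplar set, transfers the best match's shape, and nudges it toward observed correspondences. The cosine-similarity matcher, the blend-style deformation, and all names here are illustrative assumptions, not the project's actual pipeline.

import numpy as np

def retrieve_shape(query_feat, exemplar_feats, exemplar_shapes):
    # Cosine similarity between the query descriptor and each exemplar descriptor.
    sims = exemplar_feats @ query_feat / (
        np.linalg.norm(exemplar_feats, axis=1) * np.linalg.norm(query_feat) + 1e-8
    )
    # Transfer the shape of the best-matching exemplar.
    return exemplar_shapes[int(np.argmax(sims))]

def deform_toward_matches(vertices, match_idx, match_targets, weight=0.5):
    # Pull matched vertices part of the way toward their observed correspondences.
    deformed = vertices.copy()
    deformed[match_idx] += weight * (match_targets - vertices[match_idx])
    return deformed

# Toy usage: three exemplars with 4-D descriptors and 8-vertex point clouds.
rng = np.random.default_rng(0)
exemplar_feats = rng.normal(size=(3, 4))
exemplar_shapes = rng.normal(size=(3, 8, 3))
query_feat = exemplar_feats[1] + 0.05 * rng.normal(size=4)  # noisy view of exemplar 1

shape = retrieve_shape(query_feat, exemplar_feats, exemplar_shapes)
shape = deform_toward_matches(shape, np.array([0, 2]), shape[[0, 2]] + 0.1)

In practice the descriptors, the shape representation, and the deformation model would all be far richer; the sketch only shows how retrieval and correspondence-driven deformation fit together.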

The research investigates improved representations of 3D shape and material and methods to recover them from a single image. Rather than aiming for veridical models, such as precise surface normals or BRDF parameters, the research team recovers approximate models that are useful for object recognition, content creation, and other tasks. The work on 3D object shape focuses on labeling object boundaries as occlusions, folds, or texture/albedo edges and using these boundaries as part of a data-driven approach to recover full 3D models of the objects. The research involves studying methods to recover rich, multiscale representations of the materials that compose objects. These methods exploit approximate representations of shape and illumination to recover pointwise estimates of the object's radiometric properties. The algorithms build maps of these material properties to model spatial variation in albedo and complex phenomena such as veins in marble. The research also involves extending these methods to report spatially varying normal maps that capture shape textures such as tree bark. Finally, the research investigates how to incorporate image-centered maps that capture more random, spatially localized phenomena such as the pits in an orange peel.
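
To make the step from approximate shape and illumination to pointwise radiometric estimates concrete, here is a minimal Python sketch assuming a purely Lambertian model and a single known directional light. The function name and the toy setup are hypothetical simplifications; the project's actual radiometric models are considerably richer than this.

import numpy as np

def estimate_albedo(intensity, normals, light_dir):
    # Lambertian image formation: I = albedo * max(n . l, 0).
    # Given observed intensities, approximate per-pixel normals, and a known
    # directional light, invert per pixel: albedo = I / shading.
    light_dir = light_dir / np.linalg.norm(light_dir)
    shading = np.clip(normals @ light_dir, 1e-3, None)  # guard grazing/backfacing pixels
    return intensity / shading

# Toy usage: two pixels with the same albedo but different orientations.
normals = np.array([[0.0, 0.0, 1.0],    # pixel facing the light
                    [0.0, 0.6, 0.8]])   # pixel tilted away
light = np.array([0.0, 0.0, 1.0])
intensity = np.array([0.9, 0.72])
print(estimate_albedo(intensity, normals, light))  # both pixels recover albedo 0.9

Maps of such pointwise estimates are what the paragraph above refers to: spatial variation in the recovered albedo, rather than shading, is what reveals material patterns like marble veins.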

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 1421521
Program Officer: Jie Yang
Budget Start: 2014-08-01
Budget End: 2019-07-31
Fiscal Year: 2014
Total Cost: $476,569
Name: University of Illinois Urbana-Champaign
City: Champaign
State: IL
Country: United States
Zip Code: 61820