The goal of this research is to develop appearance-based representations of three-dimensional scenes that permit images corresponding to a continuous range of views of the scene to be synthesized by adaptively combining a set of basis images. These steerable appearance models analytically model the effects of a moving camera on scene appearance. The approach is based on decomposing scene appearance into three separate components: epipolar motion, visibility, and photometric properties, which together determine how the scene will appear from arbitrary viewpoints. Each component is represented and steered independently before being combined to produce views of the scene. The components are directly computable from the basis images under certain conditions, avoiding traditional difficulties caused by ill-posed problems and complex 3-D scene recovery tasks.

The research centers on the dual problems of measurability and steerability, which provide a formal basis for designing and evaluating the proposed models. Measurability defines the conditions under which it is possible to recover a representation from a set of basis images. Steerability specifies the range of views that may be reconstructed as well as the mechanism for producing them.

Results should provide a better theoretical understanding of the dynamic effects of viewpoint on scene appearance. Potential applications of steerable appearance models will be investigated, such as three-dimensional object recognition based on viewpoint-dependent grayscale appearance, and visual map construction of unknown three-dimensional environments by active exploration.
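The core synthesis step described above, producing a new view by adaptively combining basis images, can be sketched as a viewpoint-dependent linear blend. This is a minimal illustration only: the function names, the harmonic steering coefficients, and the normalization are illustrative assumptions, not the proposal's actual formulation (which steers epipolar motion, visibility, and photometric components separately before combining them).

```python
# Hypothetical sketch: synthesize a view as an adaptive linear
# combination of basis images. The cosine steering functions and
# normalization below are illustrative assumptions.
import numpy as np

def steer_view(basis_images, theta):
    """Blend basis images with viewpoint-dependent coefficients.

    basis_images : array of shape (k, H, W), the stored basis views
    theta        : viewing parameter (e.g., camera azimuth in radians)
    """
    k = basis_images.shape[0]
    # Illustrative steering functions: harmonics of the view angle,
    # normalized so the coefficients sum to 1.
    coeffs = np.array([np.cos(i * theta) for i in range(k)])
    coeffs = np.abs(coeffs) / np.abs(coeffs).sum()
    # Weighted sum over the basis dimension yields the synthesized view.
    return np.tensordot(coeffs, basis_images, axes=1)

# Two constant "images" make the blend easy to check by hand:
# at theta = pi/3 the (normalized) weights are [2/3, 1/3], so the
# synthesized view is 1/3 everywhere.
basis = np.stack([np.zeros((4, 4)), np.ones((4, 4))])
view = steer_view(basis, theta=np.pi / 3)
```

A real steerable appearance model would not blend pixels directly like this; the point is only the mechanism of varying a small set of coefficients continuously with viewpoint while the basis images stay fixed.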

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 9530985
Program Officer: Jing Xiao
Project Start:
Project End:
Budget Start: 1996-06-01
Budget End: 2000-12-31
Support Year:
Fiscal Year: 1995
Total Cost: $275,883
Indirect Cost:
Name: University of Wisconsin Madison
Department:
Type:
DUNS #:
City: Madison
State: WI
Country: United States
Zip Code: 53715