As a monocular observer moves through a 3D world, the parts of the scene that are visible, and their projected geometry, change with the observer's vantage point. This project aims to quantitatively describe and model the relationship between motion and the induced change in the appearance of 3D shape. Because motion is such an integral part of this information-gathering process, the investigators are developing a uniform spatiotemporal approach to 3D object representation and intermediate-level motion description for dynamic, model-based 3D object recognition. To model explicitly the dynamic appearance of objects as the viewpoint moves, the approach uses "aspect space," the cross product of the image plane and viewpoint space. New 3D object representations are being developed in this multidimensional space, including the asp and the rim appearance representation. These representations describe how the visible geometric features of projected shape change over time due to egomotion or object motion. Complementing this work on dynamic, viewer-centered representations, the research will also derive intermediate-level motion descriptions from the motion of features in spatiotemporal image volumes prior to object recognition and description. Methods will be developed for intermediate-level motion description, including spatiotemporal surface flow and flow-line detection, segmentation of spatiotemporal surfaces, and detection of properties such as cyclic motion and T-junction motion.
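To make the notion of aspect space concrete, the sketch below is an illustrative assumption rather than the award's own formulation: it treats an aspect-space sample as image-plane coordinates crossed with viewpoint parameters, using an assumed orthographic camera on a viewing sphere, and sweeps the viewpoint to trace how one vertex's projection changes with observer motion.

```python
# Illustrative sketch only (not from the award text): an "aspect space" sample
# as the cross product of image-plane coordinates and viewpoint parameters.
# The orthographic camera and (azimuth, elevation) parameterization are assumptions.

import numpy as np

def view_direction(azimuth, elevation):
    """Unit vector pointing from a viewpoint on the viewing sphere toward the origin."""
    return -np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])

def aspect_sample(point, azimuth, elevation):
    """Orthographic projection of a 3D point for a given viewpoint.

    Returns an aspect-space sample (u, v, azimuth, elevation): image-plane
    coordinates crossed with the viewpoint coordinates.
    """
    w = view_direction(azimuth, elevation)        # viewing axis
    up = np.array([0.0, 0.0, 1.0])                # assumed world "up" for the image basis
    u_axis = np.cross(up, w)
    u_axis /= np.linalg.norm(u_axis)
    v_axis = np.cross(w, u_axis)
    return (point @ u_axis, point @ v_axis, azimuth, elevation)

# Sweeping the viewpoint traces a feature's trajectory through aspect space,
# i.e., how its projected position changes with observer motion.
vertex = np.array([1.0, 0.5, 0.25])
for az in np.linspace(0.0, np.pi / 2, 5):
    u, v, az_out, el = aspect_sample(vertex, az, 0.3)
    print(f"azimuth={az_out:.2f}  image=({u:+.3f}, {v:+.3f})")
```

In this hypothetical parameterization, a fixed scene feature becomes a curve in the four-dimensional space (u, v, azimuth, elevation); representations such as the asp describe how such projected structure varies across the viewpoint dimensions.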

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 9022608
Program Officer: Howard Moraff
Project Start:
Project End:
Budget Start: 1991-05-15
Budget End: 1994-04-30
Support Year:
Fiscal Year: 1990
Total Cost: $241,727
Indirect Cost:
Name: University of Wisconsin Madison
Department:
Type:
DUNS #:
City: Madison
State: WI
Country: United States
Zip Code: 53715