Navigating the world from visual input alone remains one of the greatest challenges for mobile robots. Vision provides the richest but also the most difficult input, since unambiguous and stable features that can serve as landmarks for navigation are hard to obtain. Challenges include the rich geometry of the world, changing scenes and obstacles, occlusion, lighting changes, and specular and transparent surfaces, to name a few. Maintaining an accurate 3D model of world features and using it for localization and navigation is difficult due to the sheer amount of data to be processed.

To address these challenges, this project presents a framework for navigation based on one-dimensional circular feature sequences extracted from 2D panoramic images. Since many real-world navigation tasks of mobile entities take place in a plane (e.g., walking on a single floor of a building, driving through a city), the goal is to identify a reduced feature set that is sufficient for navigation, requires little storage space, and can be processed quickly.
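To make the idea of a one-dimensional circular feature sequence concrete, the following is a minimal illustrative sketch rather than the project's actual feature extractor: each column of a panoramic image is mapped to a bearing angle, and columns with strong vertical-edge response are kept as features. The column-energy measure, threshold, and raw-column descriptor are assumptions made purely for illustration.

```python
import numpy as np

def circular_feature_sequence(panorama, threshold=30.0):
    """Reduce a 2D panorama to a 1D circular sequence of (bearing, descriptor) pairs.

    Each image column corresponds to a bearing angle around the robot.
    Columns whose vertical-edge energy is a local maximum above the
    threshold are kept as landmark candidates.
    """
    gray = panorama.mean(axis=2) if panorama.ndim == 3 else panorama.astype(float)
    # Horizontal gradient responds to vertical structures (door frames, poles, corners);
    # appending the first column makes the gradient wrap around the 360-degree view.
    grad = np.abs(np.diff(gray, axis=1, append=gray[:, :1]))
    column_energy = grad.sum(axis=0)

    width = column_energy.shape[0]
    features = []
    for col in range(width):
        e = column_energy[col]
        # Keep local maxima above the threshold as landmark candidates.
        if (e > threshold
                and e >= column_energy[(col - 1) % width]
                and e >= column_energy[(col + 1) % width]):
            bearing = 2.0 * np.pi * col / width
            descriptor = gray[:, col].copy()  # plain column descriptor, for illustration only
            features.append((bearing, descriptor))
    return features
```

The output is ordered circularly by bearing, so the entire panorama collapses to a short one-dimensional sequence that can be stored and compared cheaply.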

The key motivation for this work is to develop a simplified visual feature model that allows robust navigation in the plane but avoids most of the difficulties associated with extracting, modeling, and matching true 3D features and solving the associated six degree-of-freedom structure-from-motion problem. This work advances the state of the art in vision-based navigation on several fronts, with novel contributions in both robotics and computer vision. Specific algorithmic contributions include new feature descriptors invariant to planar motion, fast circular feature matching algorithms that allow both unmatched features and ordering reversals, probabilistic topological motion planning methods that include explicit modeling of the reliability of feature detection and identification, and strategies for graph-based world modeling and selective feature storage.
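As a concrete illustration of circular matching that tolerates unmatched features and ordering reversals, here is a deliberately simple brute-force sketch, not the fast matching algorithms proposed in the project: a gap-tolerant dynamic-programming alignment is evaluated over every cyclic rotation of one sequence, in both forward and reversed order, and the cheapest alignment wins. The descriptor distance and gap penalty are placeholder choices, and this baseline runs in cubic time, whereas the project targets faster algorithms.

```python
import numpy as np

def align_cost(seq_a, seq_b, gap_penalty=1.0):
    """Global alignment cost between two descriptor sequences.

    A standard dynamic-programming alignment in which features may be
    left unmatched at the price of a gap penalty.
    """
    n, m = len(seq_a), len(seq_b)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = np.arange(n + 1) * gap_penalty
    dp[0, :] = np.arange(m + 1) * gap_penalty
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            dp[i, j] = min(dp[i - 1, j - 1] + match,    # match features i and j
                           dp[i - 1, j] + gap_penalty,  # leave feature i unmatched
                           dp[i, j - 1] + gap_penalty)  # leave feature j unmatched
    return dp[n, m]

def circular_match(seq_a, seq_b, gap_penalty=1.0):
    """Best alignment cost over all cyclic rotations of seq_b, forward and reversed.

    Rotations absorb the unknown heading offset between the two panoramas;
    trying the reversed sequence as well accommodates ordering reversals.
    """
    best = float("inf")
    for candidate in (list(seq_b), list(reversed(seq_b))):
        for shift in range(len(candidate)):
            rotated = candidate[shift:] + candidate[:shift]
            best = min(best, align_cost(seq_a, rotated, gap_penalty))
    return best
```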

The proposed techniques have wide applicability, ranging from autonomous robot navigation in unknown environments to camera-based navigation aids for pedestrians, bicyclists, and cars. The project also provides an opportunity to expose undergraduates at a liberal arts college to the world of research, experimentation, and discovery.

Undergraduate students at Middlebury College, an undergraduate institution in rural Vermont, will be actively involved in all components of this research. The student researchers will also join the PIs in authoring papers and attending conferences. Other students will benefit from the research activities through the integration of current research topics into the curriculum and through the use of the lab facilities enhanced by the project. Tight integration of research and education is a central career goal of the PIs. Experience gained in prior and ongoing collaborations with undergraduates shows that robotics and computer vision research is ideally suited to exciting and challenging students.

Progress on this project will be regularly reported at http://vision.middlebury.edu/navigation

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0713442
Program Officer: Jie Yang
Project Start:
Project End:
Budget Start: 2007-08-01
Budget End: 2012-07-31
Support Year:
Fiscal Year: 2007
Total Cost: $320,000
Indirect Cost:
Name: Middlebury College
Department:
Type:
DUNS #:
City: Middlebury
State: VT
Country: United States
Zip Code: 05753