"This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5)."
Computer vision research and related technology are on the cusp of moving out of the lab to meet real-world challenges: understanding, reasoning, and navigation in large-scale, dynamic, and complex environments.
This project will acquire a state-of-the-art, custom sensor suite, the 3D Content Digitization Suite (3DCDS), to support research and performance evaluation of computer vision and robotics algorithms in challenging real-world, outdoor, dynamic scenes, with emphasis on 2D-3D fusion for real-time geometry processing and on large-scale scene understanding and hierarchical semantic context inference.
3DCDS includes high-end panoramic laser range sensors, panoramic cameras, navigation equipment, and off-the-shelf cameras arranged on a custom-made, reconfigurable platform mounted on a vehicle. 3DCDS enables testing of scene understanding and awareness algorithms under varied conditions and facilitates the collection of ground truth data. Trade-offs across different sensing platforms will be evaluated, and the extensive dataset will serve as a benchmark for a broad range of algorithms.
All data and benchmarks will be made publicly available. Stevens will host a web portal where researchers can submit the results of their algorithms on the benchmarks.
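To indicate the kind of scoring such a benchmark portal typically supports, the following is a minimal sketch of evaluating a submitted semantic segmentation against ground truth labels using per-class intersection-over-union (IoU). The label values, array sizes, and metric choice are illustrative assumptions, not the actual 3DCDS benchmark format or evaluation protocol.

```python
import numpy as np

# Minimal sketch: score a predicted label map against a ground-truth label
# map with mean per-class IoU. All values below are illustrative placeholders.

def mean_iou(predicted, ground_truth, num_classes):
    """Mean IoU over the classes that appear in the ground truth."""
    ious = []
    for c in range(num_classes):
        pred_c = predicted == c
        gt_c = ground_truth == c
        if gt_c.sum() == 0:
            continue  # class absent from ground truth; skip it
        intersection = np.logical_and(pred_c, gt_c).sum()
        union = np.logical_or(pred_c, gt_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gt = rng.integers(0, 4, size=(480, 640))   # 4 hypothetical semantic classes
    pred = gt.copy()
    noise = rng.random(gt.shape) < 0.1         # corrupt 10% of the labels
    pred[noise] = rng.integers(0, 4, size=noise.sum())
    print(f"mean IoU: {mean_iou(pred, gt, num_classes=4):.3f}")
```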
Research agendas that can be supported by this infrastructure and the collected data span multiple domains and include: fusion of range and image data for inferring accurate, high-resolution, hierarchical scene segmentation; object recognition and scene understanding using invariant features combining geometry and appearance; real-time, dynamic scene awareness; comprehensive evaluation of sensors and methods on ground truth data captured in uncontrolled environments.
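As a concrete illustration of the range-image fusion theme above, the sketch below projects 3D laser points into a camera image so that each point can be associated with appearance information, which is the basic geometric step underlying combined geometry-and-appearance segmentation. The intrinsic matrix, laser-to-camera transform, and synthetic scan are assumed placeholder values, not parameters of the 3DCDS hardware.

```python
import numpy as np

# Minimal sketch of range-image fusion: project laser points into an image.
# Calibration values are illustrative placeholders only.

# Assumed camera intrinsics (focal lengths and principal point, in pixels).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Assumed rigid transform from the laser frame to the camera frame.
R = np.eye(3)                      # rotation (identity for illustration)
t = np.array([0.1, 0.0, 0.0])      # translation in meters

def project_points(points_laser, image_shape):
    """Project Nx3 laser points to pixel coordinates.

    Returns pixel coordinates and a mask of points that land inside the
    image and lie in front of the camera.
    """
    # Transform points into the camera frame.
    points_cam = points_laser @ R.T + t

    # Keep only points with sufficient positive depth.
    in_front = points_cam[:, 2] > 0.1

    # Perspective projection: apply intrinsics, then divide by depth.
    pixels_h = (K @ points_cam.T).T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]

    h, w = image_shape[:2]
    in_image = (
        (pixels[:, 0] >= 0) & (pixels[:, 0] < w) &
        (pixels[:, 1] >= 0) & (pixels[:, 1] < h)
    )
    return pixels, in_front & in_image

if __name__ == "__main__":
    # A small synthetic scan: random points a few meters ahead of the sensor.
    rng = np.random.default_rng(0)
    scan = rng.uniform([-2.0, -1.0, 2.0], [2.0, 1.0, 10.0], size=(1000, 3))
    pixels, valid = project_points(scan, image_shape=(720, 1280))
    print(f"{valid.sum()} of {len(scan)} points project into the image")
```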