The objective of this project is to devise computer vision methods that enable a Portable Blind Navigational Device (PBND) to guide a visually impaired person in unstructured environments. The main research question is whether a single perception sensor can solve the blind navigation problem, including localization of the PBND and object recognition. A distinctive feature of this work is that it addresses the blind navigation problem by simultaneously processing the visual and range information of a 3D imaging sensor.
The project consists of four related research endeavors. First, it investigates techniques for accurate and precise pose estimation of the PBND in a GPS-denied environment. Second, it develops an effective 3D data segmentation method to allow scene recognition for wayfinding. Third, it applies the pose estimation method to register 3D range data and devises methods to reduce registration error. Fourth, it addresses the real-time implementation of these methods on a PBND with limited computing power.
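The summary does not specify the registration algorithm, but the third endeavor, pose-based registration of 3D range data, can be illustrated with a minimal sketch. Assuming point correspondences between two scans are available, a least-squares rigid alignment (the Kabsch/SVD method, a standard building block in point-cloud registration pipelines such as ICP) recovers the rotation and translation between them; all names and parameters below are illustrative, not taken from the proposal.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate the rotation R and translation t minimizing
    sum_i ||R @ src[i] + t - dst[i]||^2 via the Kabsch (SVD) method,
    assuming known point-to-point correspondences."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered clouds.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Synthetic example: a scan rotated 30 degrees about z and translated.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((100, 3))
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
moved = cloud @ R_true.T + t_true

R_est, t_est = rigid_align(cloud, moved)
registered = cloud @ R_est.T + t_est
max_err = np.abs(registered - moved).max()
```

In practice the correspondences are unknown and noisy, which is where the proposed error-reduction methods would come in; this sketch only shows the closed-form alignment step that such pipelines iterate.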
The research will result in new algorithms that can improve the lives of the visually impaired in the near term. These algorithms will also enable autonomy in small robots, which have wider applications in military situational awareness, firefighting, and search and rescue. The discoveries will revolutionize small robot autonomy and impact the robotics research community as a whole. Broader impacts also include the training of undergraduate and graduate students and educating the public on robotics through workshops and robot exhibits in science museums and technology showcases.