This Small Business Innovation Research (SBIR) Phase I project focuses on reducing the footprint of machine vision and human interface solutions in support of a wearable apparatus that will improve environmental awareness for the visually impaired. Computer vision has provided significant capabilities in the robotics domain, including object tracking, facial recognition, environmental localization, and hazard detection. In the past, computer vision sensor/software systems have been slow, bulky, and power-hungry. Recent advances in imaging hardware and embedded processing now provide an opportunity to shrink vision systems, including stereo vision and other complex operations, to a size that allows them to be embedded in a wearable apparatus similar to wraparound sunglasses. By using an auditory signal to feed environmental information back to the wearer, this device will provide valuable, previously unavailable visual sensing capabilities to the visually impaired. These include: 1) determining distance traveled, even in GPS-denied environments; 2) detecting and classifying obstacles, drop-offs, overhangs, and other nearby hazards; and 3) detecting the presence and relative location of nearby people.
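As a concrete illustration of capability 2) above, the minimal sketch below shows how a calibrated stereo pair can be reduced to a nearest-obstacle distance using off-the-shelf OpenCV block matching. The focal length, baseline, and alert threshold here are assumed placeholder values, not parameters of the actual hardware; this is one plausible way to realize the capability, not the project's implementation.

import cv2
import numpy as np

# Hypothetical calibration values; a real device would obtain these
# from stereo calibration of its embedded camera pair.
FOCAL_PX = 700.0      # focal length in pixels (assumed)
BASELINE_M = 0.06     # separation between the two lenses in meters (assumed)
HAZARD_RANGE_M = 1.5  # alert threshold in meters (assumed)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def nearest_obstacle_distance(left_gray, right_gray):
    """Estimate the distance in meters to the closest surface in view."""
    # StereoBM returns fixed-point disparities scaled by 16.
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disp > 1.0  # ignore unmatched and very distant pixels
    if not np.any(valid):
        return None
    # Depth from triangulation: Z = f * B / disparity.
    depth = (FOCAL_PX * BASELINE_M) / disp[valid]
    return float(depth.min())

A wearable system would run this per frame and could, for example, raise an audible alert whenever the returned distance drops below HAZARD_RANGE_M.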

The broader impact/commercial potential of this project will be a significant breakthrough in the compact combination of computer vision and human interface technologies. The technology developed under this proposal has considerable impact for the visually impaired and strong commercial potential. The needs of the visually impaired are not being met by existing technology. The proposed technology will increase the independence of the visually impaired and improve their quality of life, especially with respect to social interaction. Although produced initially to help the visually impaired, the technology has the potential for much broader scientific and commercial impact. Commercial applications for this product include robotics, military ground forces, augmented reality, and surveillance. The augmented reality market, in particular, has broad impact beyond the visually impaired: first responders such as firefighters and police officers could receive additional information via a computer vision prosthetic that enhances their existing perception. Additionally, there is strong demand among human interface researchers for this technology in a commercially available device.

Project Report

As processors, cameras, and displays become smaller and more energy efficient, people increasingly rely on mobile computers for organization, entertainment, and communication in their daily lives. These technologies also have the ability to change the daily lives of people with low vision, hearing loss, or cognitive disabilities. To that end, TRACLabs is developing the WeaRable Augmented PercePtion for Environmental Recognition system (WRAPPER for short). WRAPPER is a revolutionary computer vision system, embedded into the common wraparound sunglasses form factor, that provides timely information about the surrounding environment to individuals who have impaired vision. WRAPPER uses a collection of AI, machine learning, and computer vision algorithms for detection, tracking, and reasoning, and presents the resulting information to the user via rich auditory profiles (for those with severely impaired vision) or via small, embedded LCD screens (for those with partial vision).

In our preliminary work, we identified six distinct modes that should exist in the WRAPPER product:

1) Local navigation assistance relies on algorithms for detecting, tracking, and reasoning about obstacles, free corridors, doorways, intersections, drop-offs, and overhangs.
2) Large-scale navigation assistance provides distance traveled, localization, and beacon-following in a mapped environment (e.g., leading the user directly to Macy's in the mall).
3) Social awareness employs reliable person detection, tracking, and facial recognition. Gesture interpretation and analysis of social interactions can also be utilized here.
4) General object recognition uses 3D matching techniques for quickly determining generic items of interest to the user (e.g., park benches, tables).
5) Specific object identification will also be integrated into the system, so that items of particular interest to the user can be quickly identified (e.g., discriminating between a $1 and a $5 bill, or recognizing the user's dog at the dog park).
6) Generic vision processing, such as text-to-speech, magnification of images, color enhancement, or other 2D filters that help low-vision users, will be included.

In our preliminary work, we also identified key technical challenges that we expect to encounter when designing the final WRAPPER system to provide the six functions listed above. The high-level challenges are:

1) Miniaturization
2) Efficient, real-time computation
3) Intelligent perception
4) Interfaces for low-vision users
5) Scientifically grounded user studies

These are not independent challenges. Miniaturized hardware and power sources dictate an upper bound on available memory and computation. This in turn affects how reliably real-time software algorithms can run. The richness of perception that can be gleaned from the sensors affects the fidelity of information that can be conveyed to the user. In future work, we will seek to strike a balance, keeping hardware to a practical size while ensuring that the user interface is sufficiently reactive and precise to be of high utility to the low-vision user.
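As a sketch of the social awareness mode above, the snippet below uses OpenCV's stock HOG pedestrian detector as a stand-in for WRAPPER's actual detection and tracking pipeline, and maps each detected person's horizontal image position to a stereo audio pan value, a simplified placeholder for the rich auditory profiles described earlier. All names and values here are illustrative assumptions.

import cv2

# Stock HOG pedestrian detector, substituting for WRAPPER's
# detection/tracking pipeline (an assumption for illustration).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def people_to_audio_pans(frame_bgr):
    """Return one pan value in [-1, 1] (left..right) per detected person."""
    rects, _weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    width = frame_bgr.shape[1]
    pans = []
    for (x, y, w, h) in rects:
        center_x = x + w / 2.0
        # Map image column to stereo pan: -1 = far left, +1 = far right,
        # so a person on the user's right is heard in the right ear.
        pans.append(2.0 * center_x / width - 1.0)
    return pans

Mapping spatial detections to spatialized audio is one natural design choice for conveying "who is where" without occupying the user's remaining vision; the real system would add tracking and facial recognition on top of raw detections.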

Agency: National Science Foundation (NSF)
Institute: Division of Industrial Innovation and Partnerships (IIP)
Type: Standard Grant (Standard)
Application #: 1014231
Program Officer: Muralidharan Nair
Budget Start: 2010-07-01
Budget End: 2010-12-31
Fiscal Year: 2010
Total Cost: $149,939
Name: Traclabs Inc.
City: San Antonio
State: TX
Country: United States
Zip Code: 78216