The inability to access information on printed signs directly impacts the independent mobility of the more than 1.2 million blind persons in the U.S. Most previously proposed technological solutions either required physical modification of the environment (talking signs or the placement of coded markers) or required the user to carry specialized computational equipment, which can be stigmatizing. A recently pursued strategy is to combine the computational capabilities of commercially available, non-stigmatizing smartphones with computer vision techniques to allow blind persons to read signs at a distance. However, although sophisticated algorithms exist to recognize and extract sign text from cluttered video input (as evidenced, for example, by mapping services such as Google Maps automatically locating and blurring only license-plate text in street-view imagery), current mobile solutions for reading sign text at a distance perform relatively poorly. This is largely because, until recently, smartphone processors simply could not execute state-of-the-art text extraction and recognition algorithms at real-time rates, forcing previous mobile sign readers to rely on older, simpler, less effective algorithms.

Next-generation smartphones run on fundamentally different, hybrid processor architectures (such as the Tegra 4 and Snapdragon 800, both released in 2013) that pair multi-core CPUs with dedicated embedded graphics processing units (GPUs), making them well suited to high-performance, vision-heavy computation. In this study, we propose to develop a smartphone-based system for finding and reading signs at a distance that significantly outperforms previous readers by implementing state-of-the-art text extraction algorithms on these hybrid GPU/CPU architectures. In Phase I, the proposed system will be developed and tested with blind users. In Phase II, feedback from user testing will be integrated into the system design, and performance will be improved to permit operation in extremely challenging environments, such as low light.
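
To make the intended pipeline concrete, the following is a minimal, illustrative sketch of a scene-text reading pipeline of the kind described above, not the proposal's actual implementation: maximally stable extremal region (MSER) candidate detection (one widely used text-detection approach of this era, cf. Neumann and Matas) followed by cheap geometric filtering and off-the-shelf OCR. It assumes the opencv-python and pytesseract packages and a hypothetical test image; on a hybrid mobile SoC, the per-frame detection stages would be the natural candidates for GPU offload.

```python
import cv2
import pytesseract


def find_and_read_signs(frame_bgr):
    """Return (bounding_box, text) pairs for candidate sign regions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # 1. Detect maximally stable extremal regions: character-like blobs
    #    that are relatively robust to the lighting variation of
    #    outdoor street scenes.
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)

    results = []
    for (x, y, w, h) in bboxes:
        # 2. Geometric filtering: discard blobs whose size or aspect
        #    ratio is unlikely for a line of sign text.
        if h < 10 or w < 10 or w / float(h) > 15 or h / float(w) > 5:
            continue

        # 3. OCR each surviving candidate; --psm 7 tells Tesseract to
        #    treat the crop as a single line of text.
        crop = gray[y:y + h, x:x + w]
        text = pytesseract.image_to_string(crop, config="--psm 7").strip()
        if text:
            results.append(((x, y, w, h), text))
    return results


if __name__ == "__main__":
    frame = cv2.imread("street_scene.jpg")  # hypothetical test image
    if frame is not None:
        for box, text in find_and_read_signs(frame):
            print(box, text)
```

A production system would group character candidates into text lines before OCR and run detection on the GPU, but the sketch shows the basic detect-filter-recognize structure the abstract refers to.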

Public Health Relevance

Over 1.2 million people in the US are blind, and the lack of safe and independent mobility substantially impacts the quality of life of this population. Printed textual signs, which sighted travelers rely on ubiquitously for navigation, are inaccessible to visually impaired persons, and this lack of access to environmental information contributes significantly to the mobility problem. This research would help develop a system with which blind persons could use commercially available smartphones to locate and read sign text at a distance.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Small Business Innovation Research Grants (SBIR) - Phase I (R43)
Project #
1R43EY024800-01
Application #
8779810
Study Section
Special Emphasis Panel (ZRG1)
Program Officer
Wujek, Jerome R
Project Start
2014-09-30
Project End
2015-09-29
Budget Start
2014-09-30
Budget End
2015-09-29
Support Year
1
Fiscal Year
2014
Name
Lynntech, Inc.
City
College Station
State
TX
Country
United States
Zip Code
77845