As computer vision object recognition algorithms improve in accuracy and speed, and computers become more powerful and compact, it is becoming increasingly practical to implement such algorithms on portable devices such as camera-enabled cell phones. This "mobile vision" approach allows normally sighted users to identify objects, signs, places and other features in the environment simply by snapping a photo and waiting a few seconds for the results of the object recognition analysis. The approach holds great promise for blind or visually impaired (VI) users, who may have no other means of identifying important features that are undetectable by non-visual cues. However, for the approach to be practical for VI users, the camera-mediated interaction between the user and the environment must be properly facilitated. For instance, since the user may not know in advance which direction to point the camera toward a desired target, he or she must be able to pan the camera left and right to search for it, and receive rapid feedback whenever it is detected. Drawing on the past experience of the PI and his collaborators with object recognition systems for VI users, we propose to study the use of mobile vision technologies for exploring features in the environment, specifically examining the process of discovering these features and obtaining guidance towards them. Our main objectives are to investigate the strategies adopted by users of these technologies to expedite the exploration process, to devise and test maximally effective user interfaces consistent with these strategies, and to assess and benchmark the efficiency of the technologies. The result will be a set of minimum design standards specifying the system performance parameters, user interface functionality and operational strategies necessary for any mobile vision object recognition system for VI users.
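To make the pan-and-detect interaction concrete, the following is a minimal illustrative sketch (not part of the proposal) of a frame-by-frame detection loop that issues non-visual feedback while the user sweeps the camera. It assumes OpenCV and a pre-trained cascade detector; the cascade file name and the feedback function are placeholders, and a real system would use speech or vibration rather than printed output.

```python
# Minimal sketch of a pan-and-detect loop: scan each camera frame for a
# target and emit rapid feedback whenever it is detected, so a VI user can
# pan the camera left and right to search for the target.
# Assumptions: OpenCV is installed; "target_cascade.xml" is a hypothetical
# pre-trained detector model.

import cv2

CASCADE_PATH = "target_cascade.xml"   # placeholder detector model
detector = cv2.CascadeClassifier(CASCADE_PATH)

def give_feedback(n_hits):
    """Stand-in for audio/vibration feedback; here we simply print."""
    print(f"Target detected ({n_hits} hit(s)) -- hold the camera steady.")

cap = cv2.VideoCapture(0)             # camera stream (device 0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hits = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(hits) > 0:
            give_feedback(len(hits))  # rapid feedback while the user pans
finally:
    cap.release()
```

The key design point this sketch reflects is latency: feedback is produced on every frame in which the target appears, rather than after a single snapshot, so the user can correct the camera's aim continuously.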

Public Health Relevance

The ability to locate and identify objects, places and other features in the environment is taken for granted every day by the sighted, but this fundamental capability is missing or severely degraded in the approximately 10 million Americans with significant vision impairments and the million who are legally blind. The proposed research would investigate how computer vision object recognition technologies, which are now being implemented on mobile devices such as cell phones but are typically designed for normally sighted users, could be modified and harnessed to meet the special needs of blind and visually impaired persons. Such research could lead to new technologies that dramatically improve independence for this population.

Agency: National Institutes of Health (NIH)
Institute: National Eye Institute (NEI)
Type: Exploratory/Developmental Grants (R21)
Project #: 1R21EY021643-01
Application #: 8097202
Study Section: Special Emphasis Panel (ZRG1-ETTN-E (92))
Program Officer: Wiggs, Cheri
Project Start: 2011-09-30
Project End: 2013-08-31
Budget Start: 2011-09-30
Budget End: 2012-08-31
Support Year: 1
Fiscal Year: 2011
Total Cost: $205,070
Indirect Cost:
Name: University of California Santa Cruz
Department: Engineering (All Types)
Type: Schools of Engineering
DUNS #: 125084723
City: Santa Cruz
State: CA
Country: United States
Zip Code: 95064
Manduchi, Roberto; Coughlan, James M. (2014) The Last Meter: Blind Visual Guidance to a Target. Proc SIGCHI Conf Hum Factor Comput Syst 2014:3113-3122.