More than 160 million blind, low-vision, and deaf-blind people worldwide have not realized the full potential of the mobile revolution. People in these groups often use special-purpose portable devices to solve specific accessibility problems, such as obtaining product information from bar codes, finding location information via GPS, and accessing printed text through optical character recognition (OCR). Unfortunately, devices targeted at these groups are specialized for one or a few functions, usually not networked, and expensive. Each device also typically targets a single disability, preventing, for instance, a deaf-blind person from using a device designed for a low-vision person. Blind, low-vision, and deaf-blind people who can afford it must therefore carry multiple devices with varying interfaces, even though many mainstream mobile devices already have the sensors, such as a camera, microphone, GPS receiver, accelerometer, and compass, needed to provide all of these functions on one device.

MobileAccessibility is the PI's approach to providing useful, accessible mobile functionality to blind, low-vision, and deaf-blind users. The approach leverages a smart phone's sensors, multi-modal output, and access to remote services to reduce the cost of existing accessibility solutions and to enable entirely new ones. Key user interaction problems addressed in this project include: (i) how a blind, low-vision, or deaf-blind person can effectively use the camera on a smart phone to achieve an accessibility goal; (ii) how a low-vision person can effectively navigate enlarged presentations on the small screen of a smart phone; (iii) how vibration can effectively convey information to a blind or deaf-blind person; (iv) how these communities can best make use of valuable network services; and (v) how one person's knowledge of their environment can be effectively captured, stored, and shared within these communities.

The user-centered design of these applications will involve blind, low-vision, and deaf-blind people throughout development. Prototype applications will be built for all three groups to provide context for the research questions. Input will use speech recognition, the touch screen, and the keyboard; output will be audio for blind users, enlargement for low-vision users, and vibration and tethering to Braille devices for deaf-blind and blind users. The resulting interfaces will be evaluated both in the lab and in the field, with a focus on identifying common interaction techniques that can be employed across multiple applications.
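To make the multi-modal output concrete, here is a minimal sketch assuming an Android smart phone and the platform's standard TextToSpeech and Vibrator APIs; the abstract does not specify a platform, and the MultiModalOutput class and announce method are hypothetical names used only for illustration. The sketch renders a single message both as speech for blind users and as a vibration pattern for deaf-blind users.

    import android.content.Context;
    import android.os.Vibrator;
    import android.speech.tts.TextToSpeech;
    import java.util.Locale;

    // Hypothetical sketch: one message, two output channels (audio and haptic).
    public class MultiModalOutput {
        private TextToSpeech tts;
        private final Vibrator vibrator;

        public MultiModalOutput(Context context) {
            // Initialize the platform text-to-speech engine for audio output.
            tts = new TextToSpeech(context, status -> {
                if (status == TextToSpeech.SUCCESS) {
                    tts.setLanguage(Locale.US);
                }
            });
            // The system vibrator provides the haptic channel.
            vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
        }

        // Present one message through both channels. A real application would
        // wait for text-to-speech initialization before speaking.
        public void announce(String message) {
            // Audio output for blind users.
            tts.speak(message, TextToSpeech.QUEUE_FLUSH, null, "announce");
            // A simple pulse-pause-pulse pattern for deaf-blind users;
            // richer patterns could encode message content.
            long[] pattern = {0, 200, 100, 200};
            vibrator.vibrate(pattern, -1); // -1 = play the pattern once
        }
    }

In a deployed application, the vibration channel could be supplemented or replaced by output to a tethered Braille device, as the abstract proposes.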

Broader Impacts: This research represents a new paradigm in mobile assistive technology, in which a single programmable device can serve a multitude of accessibility needs. Rather than requiring separate devices for different needs, accessibility solutions can be downloaded to a single device. The research challenge is to design, build, and evaluate novel accessibility solutions in this new paradigm. A mobile phone that can accomplish multiple accessibility tasks has the potential to give the target communities more independence than they currently have. Furthermore, the MobileAccessibility solution has the potential to be less expensive and more sustainable than current accessibility solutions. Qualified students with disabilities will be recruited as researchers, giving them a chance to participate in work that directly affects them. New project-oriented curricula based on MobileAccessibility will be created.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 1116051
Program Officer: Ephraim Glinert
Budget Start: 2011-08-01
Budget End: 2015-07-31
Fiscal Year: 2011
Total Cost: $516,000
Name: University of Washington
City: Seattle
State: WA
Country: United States
Zip Code: 98195