The World Wide Web has evolved into an indispensable medium for the dissemination of information, entertainment, commerce and education. However, the graphical nature of most browsing software, coupled with the diversity and complexity of web content, has limited access to this technology for an entire community of people with visual disabilities. Existing audio browsers based on text-to-speech conversion (e.g., screen readers) cannot describe the conceptual organization of a document's content or let a user select which parts of a document to listen to. As a result, people with visual disabilities can find it difficult to perform common tasks, such as distinguishing topics or correlating similar items, that are key to understanding the organization of documents, and so they waste considerable time and attention listening to irrelevant information. In this project the PI and his team will develop, test and disseminate the HearSay web browser, which will bring the browsing experience of people with visual disabilities closer to that of sighted people. HearSay is based on automated techniques for structuring the content of web documents into labeled partitions consisting of logically related items. By enabling interactive, speech-driven guided exploration, in which the system presents the document's labeled content and the user selects which parts to listen to and when to navigate to a new page, HearSay will make non-visual browsing far less cumbersome. Furthermore, for repetitive browsing tasks, HearSay will let users create and retrieve personalized content in several ways, ranging from content-based voice-marking of selected partitions in a page to powerful personal information assistants that gather and present user-defined information on command. To ensure that the HearSay system is truly useful "as advertised" to the intended community, the PIs have established a collaboration with Helen Keller Services for the Blind in Hempstead, NY, which trains people with visual disabilities and will consult on the design and evaluation of HearSay.

Broader Impacts: This research will result in new algorithms and powerful technologies that enable end users to navigate web content using audio. This capability will be especially valuable to people who are visually impaired, enabling them to browse and customize web content by themselves, but it will also be useful for mobile users of small-form devices, reducing their dependence on specialized content providers. The PIs will develop a special version of HearSay for use with the Blackboard educational system, which will be advertised and disseminated to educational institutions to improve access to educational materials for students with visual disabilities. In coordination with this project, two workshops on technology for accessibility will be organized, one focusing on accessibility technology in general and the other on accessibility technology for postgraduate and adult education. These workshops will serve to raise awareness of the issues faced by people with visual disabilities and of the potential for content-based techniques to improve information access for users who are mobile and/or have disabilities.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0534419
Program Officer: Ephraim P. Glinert
Budget Start: 2005-12-01
Budget End: 2010-05-31
Fiscal Year: 2005
Total Cost: $550,623
Institution: State University New York Stony Brook
City: Stony Brook
State: NY
Country: United States
Zip Code: 11794