Graphical representations often form an integral part of highly interactive interfaces that enhance human insight and creativity. The benefits of such interfaces are for the most part denied, however, to individuals who are blind or visually impaired, who are consequently placed at a significant disadvantage with respect to learning throughout their lives. For young children, graphical representations in books are often critical for developing a vocabulary, as many objects cannot easily be obtained or safely handled physically. Later on, examination and interpretation of graphical information such as experimental time-series data, mathematical waveforms, and geographical diagrams are crucial for gaining insight and exercising creativity in these areas. Currently, the most common method of making 2D graphical information accessible to individuals who are blind or visually impaired is the static raised-line drawing, but this method is very poor at relaying information in unconstrained tasks such as those mentioned above. In this project the PI adopts a novel haptics-based approach to the challenge of developing more suitable interactive methods and appropriate representations that enhance understanding of unfamiliar information (whether patterns, groups of items, or individual items) and improve the user's ability to make discoveries or propose explanations. She will develop highly interactive graphics technology that allows the user to actively and separately control both the magnification and the simplification of a graphic during its examination. This will allow the user to customize and vary as needed the trade-off between the two main limitations of haptic processing: the need to integrate information serially, and poor tactile spatial resolution.
The enriched representations will use texture or vibration as a way to encode the separation of a graphic into objects and object parts, and to describe the 3D orientation of parts, which are two of the most difficult aspects of interpretation. The project will be conducted with input and feedback from members of the target user community, and validated experimentally.

Broader Impacts: This research will help move knowledge about human haptic processing from theory to practice. Because the focus is on developing enabling technologies for higher-level thinking skills, the project will empower individuals who are blind or visually impaired (and ultimately all users) with new tools for making more significant contributions to society. In particular, the work will help prevent individuals from the target community from falling further behind as information technology advances, and will enable them to enjoy a better income and quality of life.

Project Report

Interactive and Enriched Haptic Graphical Representations for People who are Blind or Visually Impaired

The main goal of this work was to significantly improve the conveying of graphical information to individuals who are blind or visually impaired through the haptic sense by taking into account the fundamental nature of the haptic system. The haptic sense comprises touch as well as the sense of joint position and of the forces exerted by the muscles. It processes information very differently from the visual sense, with correspondingly different strengths and weaknesses, and this creates significant problems when users try to interpret tactile diagrams. The general idea of this work was to improve the representations of diagrams to facilitate their interpretation, and to allow diagrams to be felt interactively through a computer, easing access to diagram information.

The first idea of this work was to develop methods of presenting information about pictures and diagrams that exploit one of the haptic system's strengths: its effectiveness at processing material properties such as texture. The second idea was to address the limitations of haptic processing for diagram information. Unfortunately, some of vision's strengths, for which diagrams are designed, are weaknesses of touch: touch is approximately ten times worse than vision at perceiving detail, and it cannot take in a diagram all at once, but only piece by piece as the fingers scan across it.

For the first idea, our primary motivation was psychological evidence that individuals can process material information, such as texture, across multiple fingers at once in a search task, whereas geometric information, such as the raised lines typically used for tactile diagrams, is processed in sequence, one finger at a time.
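As an illustration of how a material code could exploit this parallelism, consider tagging each part of a drawn object with its own vibration frequency, rendered independently under whichever finger touches that part's lines. This is a hypothetical sketch under assumed names and parameters, not the project's actual implementation:

```python
import math

# Hypothetical texture code: each object part gets a distinct vibration
# frequency, so the texture felt under a finger identifies which part
# the touched line belongs to. Part names and frequencies are illustrative.
PART_FREQUENCIES_HZ = {"body": 40.0, "handle": 80.0, "spout": 160.0}

def actuator_amplitude(part, t):
    """Drive signal for the tactile actuator under a finger resting on a
    line belonging to `part`, at time t (seconds)."""
    return math.sin(2.0 * math.pi * PART_FREQUENCIES_HZ[part] * t)

def render_fingers(parts_under_fingers, t):
    """One independent channel per finger: textures are computed in
    parallel, matching the parallel haptic processing of material
    properties across fingers."""
    return [actuator_amplitude(p, t) for p in parts_under_fingers]
```

Because each finger has its own feedback channel, the per-part textures can be felt simultaneously, unlike raised geometric lines, which must be traced one finger at a time.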
We hypothesized that if we could use texture to encode information that aids understanding of a diagram, such as hard-to-determine properties like which lines belong to which object parts and the 3D orientation of each part, we could improve performance. We then developed a computer interface device with one tactile feedback channel per finger to portray this information from a computer; this format was chosen because it allows a diagram to be displayed easily, versus creating it physically or using specialized equipment that is costly, slow, and cumbersome. We found, through experimentation, that even for one finger, using texture to relay information in a diagram improved performance by a factor of two. Performance with multiple fingers improved even further, by a factor of three. In contrast, traditional line diagrams showed no improvement at all with three fingers.

For the second idea, addressing the weaknesses of haptics, the typical practice is for a professional diagram maker, either with a software program or manually, to remove details from the visual diagram. In addition, they almost always remove information not relevant to the specified task at hand. There are two problems with this: 1) if the user asks new questions, new diagrams need to be made, and 2) it greatly limits incidental learning by not allowing users to explore all of the information. To address this problem, we examined ways of accessing details without causing information overload. One method was a software zooming function, as exists for visual diagrams on computers. However, because haptics processes information very differently than vision, typical visual techniques can be slow (because the fingers have to scan the diagram serially, it can take much longer to realize that a new zoom level provides no new information) and error prone (users may not realize that objects have been clipped).
We developed an algorithm that, when zooming, examines the local context of the current area of the diagram and magnifies so that the next level of detail in that local area fills the screen. We found that this intuitive zooming technique had significantly higher odds of a correct response than both the linear (odds ratio = 2.64) and logarithmic (odds ratio = 2.31) zooming techniques. Another method we examined was to allow users to select simplified alternatives to the original diagram, depending on which information is needed; this information is meant to be extracted automatically rather than manually by a professional preparer. We found that users improved their performance significantly (by over 46% on average) compared to using the original diagrams.
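The context-aware zooming idea can be sketched as follows. This is a minimal illustration assuming the diagram is stored as a tree of elements with bounding boxes; the data structures and names here are assumptions for exposition, not the project's actual code:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Box:
    x0: float
    y0: float
    x1: float
    y1: float

@dataclass
class Element:
    bbox: Box
    children: list = field(default_factory=list)

def contains(box, point):
    x, y = point
    return box.x0 <= x <= box.x1 and box.y0 <= y <= box.y1

def union(boxes):
    boxes = list(boxes)
    return Box(min(b.x0 for b in boxes), min(b.y0 for b in boxes),
               max(b.x1 for b in boxes), max(b.y1 for b in boxes))

def intuitive_zoom(element, focus):
    """Return the region a zoom-in step should magnify to full screen.
    Rather than scaling by a fixed factor, take the union of the child
    elements under the user's finger (`focus`), so the next level of
    detail fills the display without being clipped."""
    local = [c for c in element.children if contains(c.bbox, focus)]
    if not local:
        return element.bbox  # no finer detail here: keep the current view
    return union(c.bbox for c in local)
```

Because the new view is the union of the local children's boxes, a zoom step never clips an element that belongs to the next level of detail, and a step that finds no finer detail leaves the view unchanged, so the user does not waste serial scanning on an uninformative zoom level.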

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0712936
Program Officer: Ephraim P. Glinert
Budget Start: 2007-09-15
Budget End: 2012-08-31
Fiscal Year: 2007
Total Cost: $397,382
Name: Virginia Commonwealth University
City: Richmond
State: VA
Country: United States
Zip Code: 23298