This project will implement a navigation assistant that uses a collection of sensing modalities and algorithms to guide a blind person through the knowledge landscape (e.g., social context, visual landmarks, scene functionality) of an unfamiliar environment. The approach rests on a portfolio of complex processes that together provide a coherent account of the state of the world within a single framework, using novel techniques that fuse information at multiple levels of abstraction. In the near term, project outcomes will directly improve the quality of life of people with visual impairments through the public release of a smartphone app. In the longer term, the societal impact of this research will extend beyond improving sensory capabilities for the blind, because it describes an approach to human augmentation through machine intelligence. The work will directly shed light on the variety of environmental knowledge that can be acquired automatically through machine perception, and on how that information can be conveyed through a physical co-robot interface. From an educational perspective, this work will develop models for integrating knowledge obtained by intelligent machines into a single source, and new theories for translating that rich knowledge into a form the user can easily understand.

Leveraging prior work, sensing modalities such as Bluetooth Low Energy (BLE) beacons, depth sensors, color cameras, and wearable inertial measurement units will be used to enable continuous localization within a novel environment. An additional layer of higher-order algorithms will build on these physical measurements of location to develop computational contextual awareness, enabling the navigation assistant to understand the knowledge landscape by identifying meaningful visual landmarks, modes of interaction (functionality) within the environment, and social context. This knowledge structure will then be conveyed to the blind user to enable contextual hyper-awareness, that is, a contextual understanding of the environment that goes beyond normative sensing capabilities, augmenting the user's ability to navigate the knowledge landscape of the environment. The navigation assistant will be instantiated in two concrete forms: a compact wearable interface and a physical robotic interface. The wearable interface will be a smartphone-based system that gives audio navigation feedback to facilitate the creation of a cognitive map. The robotic interface will be a wheeled hardware platform that guides the user through haptic feedback, further reducing the cognitive load of interpreting and following audio feedback. Both platforms will be refined and evaluated in real-world scenarios based on principles derived from rigorous user studies. Project outcomes will include a navigation assistant that can help a blind person walk a path through a novel indoor or outdoor suburban environment to a desired destination. The two physical interfaces will also be used to develop working theories and models for co-robot scenarios that must take into account situational context and the preferential dynamics of the user.
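
As a rough illustration of the kind of multi-sensor localization the project describes (not the project's actual method), the minimal Python sketch below fuses a coarse position fix derived from BLE beacon signal strength with step-wise dead reckoning from a wearable IMU, using a simple complementary filter. The beacon map, path-loss constants, step length, and blending weight are all hypothetical placeholders chosen only for the example.

    import math

    # Hypothetical beacon map: beacon ID -> (x, y) position in meters.
    BEACONS = {"b1": (0.0, 0.0), "b2": (10.0, 0.0), "b3": (0.0, 10.0)}

    def rssi_to_distance(rssi, tx_power=-59, n=2.0):
        """Log-distance path-loss model; tx_power and n are assumed constants."""
        return 10 ** ((tx_power - rssi) / (10 * n))

    def ble_position(rssi_readings):
        """Coarse fix: weight each beacon's known position by 1/distance."""
        wsum, x, y = 0.0, 0.0, 0.0
        for beacon_id, rssi in rssi_readings.items():
            bx, by = BEACONS[beacon_id]
            w = 1.0 / max(rssi_to_distance(rssi), 0.1)
            x += w * bx
            y += w * by
            wsum += w
        return (x / wsum, y / wsum)

    def dead_reckon(position, heading_rad, step_length=0.7):
        """Advance the estimate by one detected step along the IMU heading."""
        px, py = position
        return (px + step_length * math.cos(heading_rad),
                py + step_length * math.sin(heading_rad))

    def fuse(predicted, measured, alpha=0.8):
        """Complementary filter: trust dead reckoning short-term, BLE long-term."""
        return tuple(alpha * p + (1 - alpha) * m
                     for p, m in zip(predicted, measured))

    # Example: start near beacon b1, take two steps heading roughly east,
    # then correct the drifted estimate with a made-up set of RSSI readings.
    estimate = (0.0, 0.0)
    for _ in range(2):
        estimate = dead_reckon(estimate, heading_rad=0.05)
    estimate = fuse(estimate, ble_position({"b1": -62, "b2": -75, "b3": -78}))
    print("fused position estimate:", estimate)

In a full system, the contextual-awareness layer described above would sit on top of such position estimates, attaching landmarks, functionality, and social context to locations before anything is conveyed to the user.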

Budget Start: 2016-09-01
Budget End: 2021-08-31
Fiscal Year: 2016
Total Cost: $1,000,000
Name: Carnegie-Mellon University
City: Pittsburgh
State: PA
Country: United States
Zip Code: 15213