Approximately eight million Americans are blind or have low vision, defined as difficulty reading common newsprint with corrective lenses (McNeil, 2001). The loss of vision can have serious repercussions for simple tasks such as cooking, reading a magazine, paying a cashier, or navigating. For many activities there are either visual aids (e.g., magnifiers, Braille, a long cane, or a guide dog) or strategies (e.g., folding money) that can compensate for the vision loss. However, there is currently no widely accepted system for aiding someone with low vision with the problem of wayfinding. Wayfinding refers to the process of navigating from one location within a large-scale space (such as a building or a city) to another, unobservable, location. It is distinct from the problem of obstacle avoidance, where a long cane or guide dog can be used to navigate around a local obstacle.

The goal of the current research proposal is to develop a low-vision navigation aid that localizes a user within an unfamiliar indoor environment (e.g., an office building or a hospital) and guides the user to their goal. The proposed aid is based upon an existing robot navigation algorithm that uses partially observable Markov decision processes (POMDPs; Kaelbling, Cassandra, & Kurien, 1996; Kaelbling, Littman, & Cassandra, 1998; Cassandra, Kaelbling, & Littman, 1994; Stankiewicz, Legge, & Schlicht, 2001). The proposed navigation aid is composed of a laser range finder, a POMDP algorithm implemented on a computer, and a digital map of the building. Fundamentally, the model uses measurements taken with the laser range finder from the user's position to the nearest wall in the direction the user is facing. Using this measurement, the POMDP model references the map and computes the locations at which the observation (i.e., the distance measurement) could have been taken. Given this collection of locations, the POMDP algorithm computes the optimal action (e.g., rotate by 90 degrees) that will get the user to the goal location using, on average, the minimum number of instructions. This process is repeated (i.e., issue instruction, execute action, update spatial uncertainty, compute optimal action) until the user reaches the destination. The model is designed to deal with the noise that is inherent in the measurements taken and in the actions generated by a low-vision user.

Unlike its predecessors, such as Talking Signs, Verbal Landmarks, and Talking Lights, the current system does not use any beacon technology for guidance. Thus, the proposed system requires very little infrastructure investment, and we anticipate that it will be a low-cost, robust navigation system for low-vision, blind, and, potentially, normally sighted users in unfamiliar buildings.
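As a rough illustration of the estimate-then-act loop described above, the Python sketch below maintains a belief distribution over (location, heading) hypotheses on a toy grid map, updates it from a single range reading, and then selects an action. The grid, the Gaussian sensor model, and the one-step greedy action rule are illustrative assumptions on my part, not the grant's actual implementation, which would solve the POMDP for a policy minimizing the expected number of instructions.

```python
import numpy as np

# Illustrative sketch only: a toy grid map, Gaussian range sensor, and a
# one-step greedy action rule stand in for the proposal's building map,
# laser range finder, and full POMDP policy.

GRID = np.array([[1, 1, 1, 1],
                 [1, 0, 0, 1],
                 [1, 1, 1, 1]])                  # 1 = free space, 0 = wall
HEADINGS = [(-1, 0), (0, 1), (1, 0), (0, -1)]    # N, E, S, W
STATES = [(r, c, h)
          for r in range(GRID.shape[0])
          for c in range(GRID.shape[1]) if GRID[r, c]
          for h in range(4)]                     # (row, col, heading) hypotheses
GOAL = (2, 3)
ACTIONS = ("forward", "turn_left", "turn_right")

def range_to_wall(r, c, h):
    """Expected laser reading: free cells ahead before the nearest wall."""
    dr, dc = HEADINGS[h]
    d = 0
    while True:
        r, c = r + dr, c + dc
        if not (0 <= r < GRID.shape[0] and 0 <= c < GRID.shape[1]) or not GRID[r, c]:
            return d
        d += 1

def update_belief(belief, z, sigma=0.5):
    """Bayes update: reweight each state by how well it explains reading z."""
    w = {s: b * np.exp(-0.5 * ((z - range_to_wall(*s)) / sigma) ** 2)
         for s, b in belief.items()}
    total = sum(w.values())
    return {s: v / total for s, v in w.items()}

def apply_action(state, action):
    """Deterministic motion model (the proposal allows for noisy actions)."""
    r, c, h = state
    if action == "turn_left":
        return (r, c, (h - 1) % 4)
    if action == "turn_right":
        return (r, c, (h + 1) % 4)
    dr, dc = HEADINGS[h]
    nr, nc = r + dr, c + dc
    if 0 <= nr < GRID.shape[0] and 0 <= nc < GRID.shape[1] and GRID[nr, nc]:
        return (nr, nc, h)
    return state                                 # blocked: user stays in place

def choose_action(belief):
    """Greedy stand-in for the POMDP solver: minimize the expected
    Manhattan distance to the goal after one action."""
    def expected_dist(a):
        return sum(b * (abs(apply_action(s, a)[0] - GOAL[0]) +
                        abs(apply_action(s, a)[1] - GOAL[1]))
                   for s, b in belief.items())
    return min(ACTIONS, key=expected_dist)

# One cycle of the loop: uniform prior (the user is lost), one range
# reading, a belief update, then the next instruction for the user.
belief = {s: 1.0 / len(STATES) for s in STATES}
belief = update_belief(belief, z=2.0)
print(choose_action(belief))
```

In the full system, choose_action would be replaced by a policy computed from the POMDP itself, and apply_action would incorporate the action noise expected of a low-vision user rather than deterministic motion.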

Agency: National Institutes of Health (NIH)
Institute: National Eye Institute (NEI)
Type: Small Research Grants (R03)
Project #: 5R03EY016089-03
Application #: 7187406
Study Section: Special Emphasis Panel (ZEY1-VSN (01))
Program Officer: Oberdorfer, Michael
Project Start: 2004-12-01
Project End: 2007-11-30
Budget Start: 2006-12-01
Budget End: 2007-11-30
Support Year: 3
Fiscal Year: 2007
Total Cost: $142,352
Indirect Cost:
Name: University of Texas Austin
Department: Psychology
Type: Schools of Arts and Sciences
DUNS #: 170230239
City: Austin
State: TX
Country: United States
Zip Code: 78712
Hamid, Sahar N; Stankiewicz, Brian; Hayhoe, Mary (2010) Gaze patterns in navigation: encoding information in large-scale environments. J Vis 10:28
Modayil, Joseph; Kuipers, Benjamin (2008) The Initial Development of Object Knowledge by a Learning Robot. Rob Auton Syst 56:879-890
Kuipers, Benjamin (2008) Drinking from the firehose of experience. Artif Intell Med 44:155-70
Stankiewicz, Brian J; Kalia, Amy A (2007) Acquisition of structural versus object landmark knowledge. J Exp Psychol Hum Percept Perform 33:378-90
Stankiewicz, Brian J; Legge, Gordon E; Mansfield, J Stephen et al. (2006) Lost in virtual space: studies in human and ideal spatial navigation. J Exp Psychol Hum Percept Perform 32:688-704