Spoken language systems, which combine speech recognition, speech synthesis, natural language processing, and human-computer interfaces, allow people to communicate with computers by voice. The system proposed here provides visually impaired users with access to wayfinding and route information. It is a spoken language system that can recognize the speech of any speaker and requires no custom hardware; a speaker-independent speech recognition system developed during Phase I will serve as its base.

Phase I has two primary goals:

1. Demonstrate the feasibility of combining hidden Markov models, segmental models, and artificial neural networks to improve recognition performance. We propose to tightly integrate these technologies in a succinct mathematical framework.

2. Determine the needs of visually impaired persons for wayfinding information and use these findings to design a language model that constrains the recognition process without appearing to constrain users' interactions.

If successful, the technology proposed here would permit the development of more powerful, cost-effective spoken language systems than are currently available for making wayfinding information accessible to persons with visual impairments.
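The abstract does not spell out how the HMM and neural-network components would be integrated; one common framing of such hybrids is to let a neural network estimate per-frame state posteriors, convert them to scaled likelihoods by dividing out the state priors, and use those in place of the HMM's emission probabilities during Viterbi decoding. The sketch below illustrates only that general idea, not the proposers' actual framework; all probabilities, dimensions, and the flat transition model are invented for demonstration.

```python
import numpy as np

def viterbi(log_trans, log_emit, log_init):
    """Most likely state sequence given log transition, emission, and
    initial probabilities (standard dynamic-programming recursion)."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]          # best log score ending in each state
    psi = np.zeros((T, S), dtype=int)       # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans            # (prev_state, next_state)
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
T, S = 6, 3                                     # frames, HMM states (toy sizes)
posteriors = rng.dirichlet(np.ones(S), size=T)  # stand-in for ANN outputs P(state|frame)
priors = np.full(S, 1.0 / S)                    # state priors (uniform for the sketch)
log_emit = np.log(posteriors) - np.log(priors)  # scaled likelihoods for the HMM
log_trans = np.log(np.full((S, S), 1.0 / S))    # flat transition model (illustrative)
log_init = np.log(np.full(S, 1.0 / S))

path = viterbi(log_trans, log_emit, log_init)
print(path)  # one state index per frame
```

With uniform priors and transitions, decoding reduces to the frame-wise argmax of the network's posteriors; in a real system, trained transition probabilities and a language model would bias the path toward plausible state sequences.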