The aim of this research is to investigate spatial learning and navigation within and across sensory modalities among sighted, low-vision, and blind participant groups. The general hypothesis posits that spatial information learned from individual sensory modalities leads to the formation of a common spatial representation shared across modalities. Experiments I and II establish whether some of the perceptual biases known for vision also manifest in tactile learning. This work represents the first effort to directly compare visual and tactile map learning and to investigate performance as a function of visual status. Experiment III uses a cross-modal learning paradigm to assess whether functionally equivalent spatial representations are formed after learning with vision, touch, and spatial language. This is the first study of its kind to directly compare environmental learning across information-matched environments in three sensory modalities. The final outcome of this research will significantly add to our understanding of spatial learning between the senses and speak to the extent of functional equivalence that exists across spatial representations. The results of these experiments will also benefit the development of navigation systems for blind users.