The proposed research investigates a representation of spatial layout that serves to guide action in the absence of direct perceptual support. We call this representation a "spatial image." Humans can perceive surrounding space through vision, hearing, and touch. Environmental objects and locations are internally represented by modality-specific "percepts" that exist only as long as they are supported by concurrent sensory stimulation. When such stimulation ceases, as when the eyes close or a sound source is turned off, the percepts also cease. A spatial image, however, continues to exist in the absence of the percept. For example, when one views an object and then closes the eyes, one experiences the continued presence of the object at its perceptually designated location. Although the phenomenological properties of the spatial image are known only to the observer, the functional characteristics of spatial images can be revealed through systematic investigation of the observer's behavior on a spatial task such as spatial updating. For example, the observer might try to walk blindly to the location of a previously viewed object along any of a variety of paths; a sizeable body of research indicates that people have an impressive ability to do so.

An important property of spatial images is that in many cases they function equivalently despite variations in the input sensory modality. In previous work, the PIs have shown that distinct input modalities, such as vision and audition, induce equivalent performance on a variety of spatial tasks. Perhaps even more surprising, spatially descriptive language was found to produce spatial images that are functionally equivalent, or nearly so, as revealed by performance on spatial tasks. Our hypothesis is that the different spatial modalities of vision, touch, hearing, and language all feed into a common amodal representation. Spatial images can also be created by retrieving information about spatial layout from long-term memory. Importantly, because spatial images are not restricted to the visual modality, blind individuals are able to perform many spatial tasks.

Although most of our understanding of spatial images comes from laboratory experiments that seem unrepresentative of everyday life, spatial images are pervasive in the lives of both sighted and blind people. For both populations, there are many circumstances in which maintaining a spatial image of the immediately surrounding environment (e.g., working at the office, playing sports) allows individuals to rapidly redirect their activity toward objects without having to re-initiate search for them, leading to fluency of action with minimal effort.

Our proposed research will further our knowledge of spatial images produced by visual, haptic, auditory, and language input, as well as those activated by retrieval of spatial information from long-term memory. The research consists of theoretically based experiments involving sighted and blind subjects. All of the experiments rely on logical inference to draw conclusions about internal processes and representations from observed behavior, such as verbal report, joystick manipulation, and more complex spatial actions like reaching, pointing, and walking. The experiments are grouped into three topics. The first topic is concerned with establishing further properties of spatial images: four of the five experiments under this topic ask whether touch and vision produce functionally similar spatial images; the fifth will investigate possible interference between spatial images derived from perception and those derived from long-term memory. The five experiments within the second topic exploit different paradigms and logic for testing whether spatial images from different sensory modalities are amodal (retaining no information about the encoding modality) or modality-specific (retaining such information). The third topic is concerned with whether spatial images are equally precise in all directions around the head, in contrast to visual images, which are thought to be of high precision only when located in front of the head.

The primary significance of this research will be the expansion of knowledge about multimodal spatial images, which so far have received very little scientific attention compared with visual images, about which hundreds of scientific papers have been published. This knowledge will further our understanding of the extent to which spatial cognition is similar in sighted and blind people. It will also be useful for researchers and technologists who are developing assistive technology, including navigation systems, for blind and visually impaired people. More generally, it will lead to improved tests of spatial cognition that will be useful for better understanding the deficits in knowledge and behavior resulting from diseases such as Alzheimer's and from brain damage.

Agency
National Institutes of Health (NIH)
Institute
National Eye Institute (NEI)
Type
Research Project (R01)
Project #
5R01EY016817-02
Application #
7895597
Study Section
Cognition and Perception Study Section (CP)
Program Officer
Wiggs, Cheri
Project Start
2009-08-01
Project End
2012-07-31
Budget Start
2010-08-01
Budget End
2012-07-31
Support Year
2
Fiscal Year
2010
Total Cost
$270,355
Indirect Cost
Name
University of California Santa Barbara
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
094878394
City
Santa Barbara
State
CA
Country
United States
Zip Code
93106
Bennett, Christopher R; Loomis, Jack M; Klatzky, Roberta L et al. (2017) Spatial updating of multiple targets: Comparison of younger and older adults. Mem Cognit 45:1240-1251
Giudice, Nicholas A; Bennett, Christopher R; Klatzky, Roberta L et al. (2017) Spatial updating of haptic arrays across the life span. Exp Aging Res 43:274-290
Klatzky, Roberta L; Giudice, Nicholas A; Bennett, Christopher R et al. (2014) Touch-screen technology for the dynamic display of 2D spatial information without vision: promise and progress. Multisens Res 27:359-78
Giudice, Nicholas A; Klatzky, Roberta L; Bennett, Christopher R et al. (2013) Perception of 3-D location based on vision, touch, and extended touch. Exp Brain Res 224:141-53
Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan et al. (2012) Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis. Atten Percept Psychophys 74:1260-7
Wolbers, Thomas; Klatzky, Roberta L; Loomis, Jack M et al. (2011) Modality-independent coding of spatial layout in the human brain. Curr Biol 21:984-9
Klatzky, Roberta L; Abramowicz, Aneta; Hamilton, Cheryl et al. (2011) Irrelevant visual faces influence haptic identification of facial expressions of emotion. Atten Percept Psychophys 73:521-30