How do people learn large-scale spaces, such as the new towns and cities they visit, as they navigate? Addressing this question poses surprising obstacles, such as the difficulty of adapting large-scale spaces for experimental testing and of controlling for pre-existing knowledge. Desktop virtual reality offers one possible way to address this question, although such testing provides an incomplete rendition of the full-body, immersive experience that is real-world navigation. The researchers will develop a 2-D treadmill coupled with a head-mounted display to allow free ambulation through large-scale virtual spaces. Successful development of this device has important societal applications. For example, pre-training with enriched body-based cues has the potential to increase knowledge transfer to real-world environments, which could be helpful for training individuals such as first responders and for navigation in wilderness environments. The device and the proposed experiments will also provide a novel understanding of the neural basis of human spatial navigation with body-based cues, fundamental to accurately modeling spatial cognition and to understanding why we often get lost when we visit new cities.

Almost all theories of the neural basis of spatial navigation, developed largely in freely navigating rodents, assume that body-based cues are critical to the neural code for space. Yet the vast majority of studies in humans involve navigation in desktop virtual reality. The novel device to be developed will permit 2-D locomotion-based VR navigation, allowing a full range of body and head rotations along with free ambulation. The experiments will determine 1) the contributions of body-based input to human spatial navigation and how navigation in VR with body-based cues can enhance subsequent knowledge of real-world environments; 2) how the brain codes spatial distance, using simultaneous EEG recordings; and 3) how the brain codes the relative directions of landmarks in the environment, by modeling the underlying multidimensional brain networks with high-resolution functional magnetic resonance imaging (fMRI). The outcomes of these experiments will be important for testing models of spatial navigation and for advancing our understanding of the extent to which we rely on visual versus body-based cues to represent spatial environments, currently an issue of significant debate in the field.

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Type: Standard Grant (Standard)
Application #: 1922439
Program Officer: Kenneth Whang
Budget Start: 2018-07-30
Budget End: 2021-08-31
Fiscal Year: 2019
Total Cost: $504,390
Name: University of Arizona
City: Tucson
State: AZ
Country: United States
Zip Code: 85719