With funding from NSF, the researchers will study how the human brain represents the dimensions needed for finding our way around in the world. The interdisciplinary approach combines cognitive and visual neuroscience methods, aimed at understanding how the human brain processes spatial navigation information. The studies incorporate behavioral, cognitive, and neuroimaging techniques to examine how the human brain codes for distance, heading direction, speed, and time, which may contribute to higher-level navigation mechanisms, such as planning a route to a known destination or finding one's way home. The results have the potential to impact other fields, including robotics and spatial sciences. In robotics, autonomous systems have difficulty determining whether they have successfully returned to their origin after an outbound journey, which robotics researchers call the loop closure problem. In contrast, humans and animals can readily solve this problem. Understanding how visual information is used to localize and orient will provide knowledge that could facilitate innovation in mobile robots and self-driving cars, or training for more efficient navigation in humans. Greater knowledge of the basic properties of human navigation could also lead to improved electronic navigation systems, better emergency response training, and more effective transportation signage.
The scientific goals harness the strengths of cognitive neuroscience, visual neuroscience, and spatial sciences to examine navigation in humans. While much is known about the navigation system in rodents, the rat and primate have fundamentally different visual systems. Contributions from the visual system provide critical information for self-motion-guided navigation, and the theoretical basis for this proposal stems from computational models positing that perceptual information, including optic flow, speed, and direction signals, is necessary for successful navigation. The researchers propose a framework in which spatial representations transform from a retinotopic to a spatiotopic organization. This framework posits testable hypotheses about the nature of self-motion-guided navigational representations in the brain. A series of experiments will examine how the human brain codes lower-level representations, such as distance, heading direction, speed, and time, which may serve as basis functions for generating higher-level navigational representations. The studies will examine how these selective properties are spatially organized in the brain, as well as the higher-level computations that bring this information together to compute path integration. To do so, the proposed studies employ innovative functional MRI paradigms adapted from visual neuroscience, including population receptive-field mapping, phase-encoded analyses, and model-based time-series analyses. The proposed work is critical for extending computational models of navigation to the systems level in humans.
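The path-integration computation described above, combining speed, heading direction, and time into an estimate of position, can be illustrated with a minimal dead-reckoning sketch. This is an assumption-laden toy, not the proposal's computational model: the function name, the discrete (speed, heading, duration) sampling, and the square test path are all hypothetical, introduced only to make the idea concrete.

```python
import math

def path_integrate(steps):
    """Toy dead-reckoning: accumulate displacement from
    (speed, heading, duration) samples. Headings are in radians,
    measured counterclockwise from the x-axis.
    (Illustrative only; the proposal's models draw on far richer
    perceptual signals such as optic flow.)"""
    x = y = 0.0
    for speed, heading, dt in steps:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# A square outbound journey returns to the origin, the situation
# the "loop closure problem" asks a navigator to recognize:
square = [(1.0, 0.0, 1.0),              # east
          (1.0, math.pi / 2, 1.0),      # north
          (1.0, math.pi, 1.0),          # west
          (1.0, 3 * math.pi / 2, 1.0)]  # south
x, y = path_integrate(square)
closed = math.hypot(x, y) < 1e-9
```

In this sketch, recognizing loop closure reduces to checking whether the integrated displacement is near zero; the proposal's question is how the brain computes and represents the underlying speed, heading, and time signals that make such a check possible.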
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.