The broad goal of the proposed research is to characterize how humans perceive and represent views of reachable environments. "Reachspaces" are an integral part of our world and day-to-day experience, but how these views are processed by the visual cognitive system is unknown. The present work aims to offer fundamental insights into the structure of internal representations of reachable environments. First, a large database of reachspace images will be constructed. Next, this proposal will leverage a recently developed approach that couples a large-scale behavioral experiment with a sparse positive embedding model to derive the attributes underlying the similarity structure of reachspace views. Finally, brain responses to a selected subset of reachspace views will be measured and modeled with the derived dimensions. These approaches have been tested and used to great effect in the domains of object and scene processing; here, we apply them to the novel stimulus domain of reachspaces. This work has the potential to form new bridges between the vision and visuomotor communities by characterizing high-level visual representations of reachable environments, and it will expand scientific understanding of how humans perceive the visual environment.
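For illustration only, the sketch below shows one way a sparse, non-negative embedding could be derived from a matrix of pairwise similarity judgments: each item is assigned a small set of non-negative attribute weights whose inner products approximate the observed similarities, with an L1 penalty encouraging sparse, interpretable dimensions. The dimensionality, penalty weight, and update rule here are assumptions for the toy example, not the proposal's actual modeling pipeline.

```python
# Minimal sketch (assumed setup, not the proposal's method): fit a non-negative,
# L1-penalized embedding X so that X @ X.T approximates a similarity matrix S.
import numpy as np

rng = np.random.default_rng(0)

def fit_sparse_positive_embedding(S, n_dims=5, l1=0.1, n_iters=2000, beta=0.5, eps=1e-9):
    """Approximate a symmetric similarity matrix S (n x n) as X @ X.T,
    with X >= 0 and an L1 penalty encouraging sparse dimensions."""
    n = S.shape[0]
    X = np.abs(rng.normal(scale=0.1, size=(n, n_dims)))
    for _ in range(n_iters):
        numer = S @ X                              # pull toward observed similarities
        denom = X @ (X.T @ X) + l1 / 4 + eps       # reconstruction term + sparsity penalty
        X *= (1 - beta) + beta * (numer / denom)   # damped multiplicative update keeps X >= 0
    return X

# Toy check: similarities generated from two latent non-negative attributes
true_X = np.abs(rng.normal(size=(30, 2)))
S = true_X @ true_X.T
X_hat = fit_sparse_positive_embedding(S, n_dims=5)
print("reconstruction error:", np.mean((X_hat @ X_hat.T - S) ** 2))
print("near-zero entries per dimension:", (X_hat < 1e-3).sum(axis=0))
```

In the proposed work, the columns of such an embedding would correspond to candidate attributes underlying the similarity structure of reachspace views, and those dimensions would then serve as predictors when modeling measured brain responses.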
This project examines how the human visual system processes views of reachable environments. This work will advance our understanding of how humans represent the near-scale physical world through vision, and it is relevant for characterizing major deviations in neural organization across individuals with visual system impairments.