The goal of this research is to understand how we see what we see: how does the brain analyze the light falling on the retina of the eye to encode a world full of objects, people, and things? During the past year we completed two projects investigating i) visual mental imagery and ii) scene perception.

i) Visual Mental Imagery. Our visual perception is the product of an interaction between bottom-up sensory information and top-down signals that guide interpretation of the input and reflect prior knowledge and intent. Mental imagery, occurring in the absence of sensory input, relies entirely on this top-down signal and therefore provides an opportunity to investigate its impact on sensory cortical areas. Using fMRI, we conducted a detailed comparison of visual imagery and perception for individual complex objects. We found that (1) we can decode the identity of the specific object participants view or imagine in multiple brain regions, and (2) imagery and perceptual information are distributed differently throughout the visual processing stream. These findings suggest that while imagery and perception engage the same brain regions, the neural dynamics operating under imagery and perception differ.

ii) Scene Perception. Real-world scenes are incredibly complex and heterogeneous, yet we identify and categorize them effortlessly. Prior studies have identified a brain region that appears specialized for scene processing, but its precise role remains unclear. We presented participants with large numbers of complex real-world scenes and used a data-driven fMRI approach to characterize the representations in this region. We found that scene representations in this region primarily reflect the spatial properties of scenes (e.g., whether they are open or closed) rather than their semantic properties (i.e., scene category). Further, we have begun to examine how different elements of complex visual scenes are represented across the different brain regions engaged during scene viewing. Specifically, we created artificial visual scenes comprising a single object on a spatial background, enabling us to tease apart the spatial and object information represented in different brain regions. In particular, we are trying to relate the representations contained within these regions to their anatomical connectivity.

Elucidating how the brain enables us to recognize objects, scenes, faces, and bodies provides important insights into the nature of our internal representations of the world around us. Understanding these representations is vital for determining the underlying deficits in many mental health and neurological disorders.
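To make the decoding result in i) concrete, the sketch below shows a minimal, hypothetical version of cross-validated classification of object identity from voxel patterns in a single region of interest. All numbers (objects, trials, voxels) and the data itself are synthetic placeholders, not the study's actual design or analysis pipeline.

```python
# Illustrative sketch only (not the authors' pipeline): cross-validated
# decoding of object identity from voxel patterns in one region of interest,
# using synthetic data in place of real fMRI responses.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_objects = 4      # hypothetical number of distinct objects
n_trials = 20      # hypothetical trials per object (viewed or imagined)
n_voxels = 100     # hypothetical voxels in the region of interest

# Simulated voxel patterns: each object evokes a weakly distinct pattern.
object_templates = rng.normal(size=(n_objects, n_voxels))
X = np.vstack([t + rng.normal(scale=2.0, size=(n_trials, n_voxels))
               for t in object_templates])
y = np.repeat(np.arange(n_objects), n_trials)

# Above-chance cross-validated accuracy indicates the region carries
# information about which specific object was viewed or imagined.
scores = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = {1/n_objects:.2f})")
```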
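Similarly, for the data-driven characterization in ii), one common way to ask whether a region's scene representations track spatial rather than semantic properties is a representational-similarity comparison. The sketch below illustrates the general idea on synthetic data; it is an assumed analysis style for illustration, not the authors' exact method.

```python
# Illustrative sketch (assumed analysis style, synthetic data): does the
# neural dissimilarity structure across scenes align better with a spatial
# model (open vs. closed) or a semantic model (scene category)?
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

n_scenes = 48
n_voxels = 200
is_open = rng.integers(0, 2, n_scenes)     # hypothetical spatial label
category = rng.integers(0, 6, n_scenes)    # hypothetical semantic label

# Simulated voxel patterns driven mainly by the spatial property.
patterns = (is_open[:, None] * rng.normal(size=(1, n_voxels)) * 1.5
            + rng.normal(size=(n_scenes, n_voxels)))

# Neural dissimilarity between every pair of scenes.
neural_rdm = pdist(patterns, metric="correlation")

# Model dissimilarities: 0 if two scenes share the property, 1 otherwise.
spatial_rdm = pdist(is_open[:, None].astype(float), metric="hamming")
semantic_rdm = pdist(category[:, None].astype(float), metric="hamming")

print("spatial model fit: ", spearmanr(neural_rdm, spatial_rdm).correlation)
print("semantic model fit:", spearmanr(neural_rdm, semantic_rdm).correlation)
```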