The goal of this research is to understand how we see what we see: how does the brain analyze the light falling on the retina of the eye to encode a world full of objects, people and things? During the past year we have continued to investigate i) the interaction between bottom-up (sensory-driven) and top-down (internally driven) processing in the brain, focusing on visual mental imagery, working memory and the effect of task, and ii) the perception of complex visual stimuli, focusing most recently on visual scenes.

i) Top-down processing. Our visual perception is the product of an interaction between bottom-up sensory information and top-down signals that guide interpretation of the input and reflect prior knowledge and intent. Mental imagery, which occurs in the absence of sensory input, relies entirely on these top-down signals and provides an opportunity to investigate their impact on sensory cortical areas. Using fMRI, we conducted a detailed comparison of visual imagery and perception for individual complex objects. We found that (1) we can decode the identity of the specific object a participant views or imagines in multiple brain regions, and (2) imagery and perceptual information are distributed differently throughout the visual processing stream. These findings suggest that while imagery and perception engage the same brain regions, the neural dynamics operating under the two conditions differ.

During working memory, object information is likewise held in mind in the absence of sensory input. We have been investigating which cortical regions are active during working memory and what information they contain (visual versus conceptual). Finally, different tasks require different types of information to be extracted from visual stimuli, relying on top-down signals. We have been investigating how the representations of complex visual stimuli vary according to the task a participant is performing.
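The decoding analyses mentioned above rely on multivoxel pattern analysis: a classifier is trained on the spatial pattern of fMRI responses across voxels and tested on held-out trials. As an illustration only, the sketch below uses simulated data and a simple correlation-based classifier of the kind common in fMRI decoding work; the object labels, voxel counts and noise model are assumptions for the toy example, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 2 objects x 20 trials x 50 voxels.
# Each object evokes a distinct mean activity pattern plus trial noise.
n_trials, n_voxels = 20, 50
templates = {obj: rng.normal(0, 1, n_voxels) for obj in ("face", "house")}
data = {obj: templates[obj] + rng.normal(0, 0.5, (n_trials, n_voxels))
        for obj in templates}

def decode(data):
    # Split-half correlation classifier: correlate each held-out trial
    # with each object's mean training pattern, predict the most similar.
    half = n_trials // 2
    train_means = {obj: x[:half].mean(axis=0) for obj, x in data.items()}
    correct = total = 0
    for true_obj, x in data.items():
        for trial in x[half:]:
            sims = {obj: np.corrcoef(trial, m)[0, 1]
                    for obj, m in train_means.items()}
            correct += (max(sims, key=sims.get) == true_obj)
            total += 1
    return correct / total

acc = decode(data)
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```

Accuracy reliably above the 50% chance level indicates that the voxel patterns carry information about object identity; the same logic extends to decoding imagined objects or the task a participant is performing.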
In multiple regions throughout the brain, we found information about the task being performed. Further, this information differed between tasks that emphasized physical properties of the visual stimuli and tasks that emphasized conceptual properties.

ii) Perception of complex visual stimuli. Real-world scenes are highly complex and heterogeneous, yet we identify and categorize them effortlessly. While prior studies have identified a brain region that appears specialized for scene processing, the precise role of this region remains unclear. We presented participants with large numbers of complex real-world scenes and used a data-driven fMRI approach to characterize the representations in this region. We found that scene representations in this region primarily reflect the spatial properties of scenes (e.g. whether they are open or closed) rather than their semantic properties (i.e. scene category). Further, we examined how different elements of complex visual scenes are represented across the brain regions engaged during scene viewing. Specifically, we created artificial visual scenes comprising a single object on a spatial background, enabling us to tease apart the spatial and object information represented in different brain regions. Some regions contained information primarily about objects, others primarily about spatial properties, while one region contained information about both. These properties were consistent with a recent neuroanatomical framework we developed describing the processing of visuospatial information in the brain.

Elucidating how the brain enables us to recognize objects, scenes, faces and bodies provides important insight into the nature of our internal representations of the world around us. Understanding these representations is vital to determining the underlying deficits in many mental health and neurological disorders.
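A common data-driven way to ask whether a region's scene representations track spatial or semantic properties is representational similarity analysis: compare the dissimilarity structure of the neural response patterns against candidate model structures. The sketch below is a hypothetical illustration with simulated data — the scene labels, region size and noise model are assumptions, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scenes: each has a spatial property (open/closed)
# and a semantic category (natural/man-made).
scenes = [("open", "natural"), ("open", "manmade"),
          ("closed", "natural"), ("closed", "manmade")] * 5
n_voxels = 60

# Simulate a region whose responses track spatial layout, not category.
base = {"open": rng.normal(0, 1, n_voxels),
        "closed": rng.normal(0, 1, n_voxels)}
patterns = np.array([base[sp] + rng.normal(0, 0.5, n_voxels)
                     for sp, _ in scenes])

def rdm(vectors):
    # Representational dissimilarity matrix: 1 - Pearson correlation.
    return 1.0 - np.corrcoef(vectors)

def model_rdm(labels):
    # 0 where two scenes share the label, 1 where they differ.
    labels = np.asarray(labels)
    return (labels[:, None] != labels[None, :]).astype(float)

neural = rdm(patterns)
spatial_model = model_rdm([sp for sp, _ in scenes])
semantic_model = model_rdm([cat for _, cat in scenes])

# Compare RDMs over the off-diagonal upper triangle only.
iu = np.triu_indices(len(scenes), k=1)
def rdm_corr(a, b):
    return np.corrcoef(a[iu], b[iu])[0, 1]

print("spatial model fit: ", rdm_corr(neural, spatial_model))
print("semantic model fit:", rdm_corr(neural, semantic_model))
```

In this simulation the neural dissimilarity structure correlates strongly with the spatial model and only weakly with the semantic model — the signature of a region representing spatial layout rather than scene category.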

Support Year: 5
Fiscal Year: 2012
Total Cost: $820,245
Name: U.S. National Institute of Mental Health
Baker, Chris I; van Gerven, Marcel (2018) New advances in encoding and decoding of brain signals. Neuroimage 180:1-3
Hebart, Martin N; Bankson, Brett B; Harel, Assaf et al. (2018) The representational dynamics of task and object processing in humans. Elife 7:
Malcolm, George L; Silson, Edward H; Henry, Jennifer R et al. (2018) Transcranial Magnetic Stimulation to the Occipital Place Area Biases Gaze During Scene Viewing. Front Hum Neurosci 12:189
Bankson, B B; Hebart, M N; Groen, I I A et al. (2018) The temporal evolution of conceptual object representations revealed through models of behavior, semantics and deep neural networks. Neuroimage 178:172-182
Silson, Edward H; Reynolds, Richard C; Kravitz, Dwight J et al. (2018) Differential Sampling of Visual Space in Ventral and Dorsal Early Visual Cortex. J Neurosci 38:2294-2303
Groen, Iris I A; Greene, Michelle R; Baldassano, Christopher et al. (2018) Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior. Elife 7:
Zeidman, Peter; Silson, Edward Harry; Schwarzkopf, Dietrich Samuel et al. (2018) Bayesian population receptive field modelling. Neuroimage 180:173-187
Hebart, Martin N; Baker, Chris I (2018) Deconstructing multivariate decoding for the study of brain function. Neuroimage 180:4-18
Torrisi, Salvatore; Chen, Gang; Glen, Daniel et al. (2018) Statistical power comparisons at 3T and 7T with a GO / NOGO task. Neuroimage 175:100-110
Roth, Zvi N; Heeger, David J; Merriam, Elisha P (2018) Stimulus vignetting and orientation selectivity in human visual cortex. Elife 7:

Showing the most recent 10 out of 44 publications