The goal of this research is to understand how we see what we see: how does the brain analyze the light falling on the retina of the eye to encode a world full of objects, people and things? During the past year we have completed two projects investigating the neural representations of faces/bodies and objects. In the first (1), we showed that brain regions specialized for processing face and body stimuli are shaped by experience. In particular, we found that face and body representations are strongest for face and body parts in their commonly experienced configurations (e.g. right body parts in the left visual field). This finding demonstrates that natural experience can have a strong effect on cortical representations. In the second (2), we showed that the brain does not contain completely abstract representations of visual stimuli (e.g. everyday objects). In particular, using both behavioral and fMRI measures, we found that representations of particular objects are specific to locations in the visual field and do not generalize across position. These findings provide important insights into object recognition in the brain, and suggest that current models of cortical visual processing, which often assume position invariance, do not accurately reflect the underlying mechanisms.

We have also initiated two major new projects investigating i) visual mental imagery and ii) scene perception.

i) Visual Mental Imagery. Our visual perception is the product of an interaction between bottom-up sensory information and top-down signals that guide interpretation of the input and reflect prior knowledge and intent. Mental imagery, occurring in the absence of sensory input, relies entirely on these top-down signals and provides an opportunity to investigate their impact on sensory cortical areas. We are using fMRI to conduct a detailed comparison of visual imagery and perception for individual complex objects. Specifically, we are asking (1) whether we can decode the identity of the specific object participants view or imagine, and (2) how imagery and perceptual information is distributed throughout the visual processing stream.

ii) Scene Perception. Real-world scenes are incredibly complex and heterogeneous, yet we are able to identify and categorize them effortlessly. While prior studies have identified a brain region that appears specialized for scene processing, the precise role of this region remains unclear. We are presenting participants with large numbers of complex real-world scenes and using a data-driven fMRI approach to identify the nature of the representations in this region. In particular, we are examining whether the scene representations in this region are primarily semantic (e.g. beach, mountain, city) or spatial (e.g. open versus closed scenes), in an effort to understand the role of this region in navigation.

Elucidating how the brain enables us to recognize objects, faces and bodies provides important insights into the nature of our internal representations of the world around us. Understanding these representations is vital for determining the underlying deficits in many mental health and neurological disorders.
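As a rough illustration of what "decoding" the identity of a viewed or imagined object from fMRI activity can involve, the sketch below trains a cross-validated linear classifier on simulated multi-voxel response patterns. The synthetic data, array shapes, and use of scikit-learn are illustrative assumptions only, not a description of the project's actual analysis pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Simulated data: 80 trials x 200 voxels, 4 object identities (20 trials each).
n_trials, n_voxels, n_objects = 80, 200, 4
labels = np.repeat(np.arange(n_objects), n_trials // n_objects)
patterns = rng.normal(size=(n_trials, n_voxels))
# Inject a weak object-specific signal so the classifier has something to find.
object_templates = rng.normal(size=(n_objects, n_voxels))
patterns += 0.3 * object_templates[labels]

# Cross-validated classification: accuracy reliably above chance implies the
# response patterns carry information about object identity.
clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(clf, patterns, labels, cv=cv).mean()
print(f"Decoding accuracy: {accuracy:.2f} (chance = {1 / n_objects:.2f})")

The same logic is commonly extended to questions like the imagery comparison above: for example, a classifier trained on patterns evoked by viewed objects can be tested on patterns evoked by imagined objects to ask whether the two conditions share a common representation.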

Support Year: 3
Fiscal Year: 2010
Total Cost: $862,054
Name: U.S. National Institute of Mental Health
Baker, Chris I; van Gerven, Marcel (2018) New advances in encoding and decoding of brain signals. Neuroimage 180:1-3
Hebart, Martin N; Bankson, Brett B; Harel, Assaf et al. (2018) The representational dynamics of task and object processing in humans. Elife 7:
Malcolm, George L; Silson, Edward H; Henry, Jennifer R et al. (2018) Transcranial Magnetic Stimulation to the Occipital Place Area Biases Gaze During Scene Viewing. Front Hum Neurosci 12:189
Bankson, B B; Hebart, M N; Groen, I I A et al. (2018) The temporal evolution of conceptual object representations revealed through models of behavior, semantics and deep neural networks. Neuroimage 178:172-182
Silson, Edward H; Reynolds, Richard C; Kravitz, Dwight J et al. (2018) Differential Sampling of Visual Space in Ventral and Dorsal Early Visual Cortex. J Neurosci 38:2294-2303
Groen, Iris Ia; Greene, Michelle R; Baldassano, Christopher et al. (2018) Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior. Elife 7:
Zeidman, Peter; Silson, Edward Harry; Schwarzkopf, Dietrich Samuel et al. (2018) Bayesian population receptive field modelling. Neuroimage 180:173-187
Hebart, Martin N; Baker, Chris I (2018) Deconstructing multivariate decoding for the study of brain function. Neuroimage 180:4-18
Torrisi, Salvatore; Chen, Gang; Glen, Daniel et al. (2018) Statistical power comparisons at 3T and 7T with a GO / NOGO task. Neuroimage 175:100-110
Roth, Zvi N; Heeger, David J; Merriam, Elisha P (2018) Stimulus vignetting and orientation selectivity in human visual cortex. Elife 7:

Showing the most recent 10 out of 44 publications