The goal of this research is to understand how we see what we see: how does the brain analyze the light falling on the retina of the eye to reveal a world full of objects, people and things? During the past year we have focused on the perception of complex visual stimuli, in particular real-world visual scenes and objects (NCT00001360).

Perception of real-world scenes: Real-world scenes are incredibly complex and heterogeneous, yet we are able to identify and categorize them effortlessly. While prior studies have identified three major brain regions that appear to be specialized for scene processing, it remains unclear what the precise roles of these regions are and what information they contain. Building on a general framework for visual processing that we proposed in the past few years, we have been investigating the basic properties of these three scene-selective regions and trying to elucidate how they interact to enable us to understand the world before our eyes. In particular, we have been investigating the extent to which these regions can be explained in terms of encoding low-level visual properties (e.g. contrast, color, edges) versus high-level properties (e.g. objects, category, actions that can be performed in the depicted scene). One main area of focus has been the relationship between retinotopy (the point-by-point mapping of the visual field onto the cortical surface of the brain) and category-selectivity (differential responses to images from different visual categories, e.g. scenes versus faces). We evaluate retinotopy by presenting fragments of scenes to specific portions of the visual field and measuring the response across the brain with functional magnetic resonance imaging (fMRI). Similarly, we measure category-selectivity by presenting images from different categories (e.g. faces, scenes, objects, bodies) and measuring the associated brain response. We find that there is no simple relationship between these two organizational principles: category-selective regions exhibit retinotopy, but individual category-selective regions overlap multiple retinotopic maps. These results suggest that individual category-selective regions may contain multiple sub-regions within specific retinotopic maps that perform separate computations on the images. We have recently tested multiple models of scene processing and found that responses in visual cortex are well predicted by deep neural networks, suggesting that these responses are primarily driven by mid-level visual features rather than object-based properties or higher-level functional associations of scenes.

One of the scene-selective regions is found in medial parietal cortex and is often implicated in memory function and spatial navigation. Our data show that there may be a gradient of function within medial parietal cortex. Posterior regions show strong retinotopy and scene-selectivity and are most strongly connected with other regions of posterior visual cortex. In contrast, anterior regions are much less retinotopic and scene-selective but show strong connectivity with regions of ventral temporal cortex and parietal cortex that are implicated in memory. Further, asking subjects to construct scenes by recalling specific episodes from memory elicits activation overlapping with these anterior regions.
Across subjects, there is high consistency in the anatomical location of regions showing perceptual scene selectivity and those showing scene-construction responses, with the fundus of the parieto-occipital sulcus separating them. To test the functional role of these scene-selective regions, we focused on a region in occipito-parietal cortex and applied transcranial magnetic stimulation (TMS) to temporarily disrupt neural processing during a task in which subjects freely viewed scene stimuli. We found that gaze behavior was disrupted in a retinotopically specific manner, suggesting a role for this region in guiding eye movements across scenes. Collectively, these results provide important insights into the brain network involved in processing real-world visual scenes, and we have recently developed a specific framework for thinking about the distributed processing of scenes within visual cortex. We are continuing to evaluate the specific roles of scene-selective regions by i) using TMS to temporarily disrupt their function and observe the impact on behavior, and ii) comparing explicit computational and theoretical models of scene representation with the representations observed in different parts of the brain.

Perception of Objects: Visual object perception is typically thought to arise from hierarchical processing of the visual input in the brain, producing increasingly abstract representations from edges to mid-level features to whole objects to conceptual properties. To understand the nature of object processing, we have focused on two specific issues.

1) Temporal evolution of object representations. To estimate a lower temporal limit for the emergence of conceptual representations, we measured brain responses using magnetoencephalography (MEG), which provides measurements with high temporal resolution, while subjects viewed a large set of object images. By testing models derived from behavioral judgments, semantics and different layers of deep neural networks, we estimate that conceptual object representations start to emerge around 150 ms following stimulus onset. Prior to this time, responses primarily seem to reflect specific visual features of the images and show little generalization across different images of the same type of object.

2) Representational dynamics of task and object processing. Despite the importance of an observer's goals in determining how a visual object is categorized, surprisingly little is known about how humans process the task context in which objects occur and how it may interact with the processing of the objects themselves. Using MEG, we studied the spatial and temporal dynamics of task and object processing. Our results reveal a sequence of separate but overlapping task-related processes spread across frontoparietal and occipitotemporal cortex. Task exhibited late effects on object processing, selectively enhancing task-relevant object features with limited impact on the overall pattern of object representations. Combining MEG with previously collected fMRI data, we observe a parallel rise in task-related signals throughout the cerebral cortex, with an increasing dominance of task over object representations from early to higher visual areas. Collectively, these results reveal the complex dynamics underlying task and object representations throughout human cortex and provide important insights into the brain network involved in processing visual objects.
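The model-testing approach described above is typically carried out as a representational similarity analysis, in which time-resolved dissimilarity matrices computed from the MEG responses are compared against dissimilarity matrices derived from behavior, semantics or deep neural network layers. The following is a minimal sketch of that general kind of analysis; all array sizes, variable names and data are hypothetical placeholders, not the project's actual pipeline.

```python
# Minimal representational-similarity sketch: correlate time-resolved MEG
# dissimilarity matrices (RDMs) with model-derived RDMs. All data are random
# placeholders standing in for real evoked responses and model predictions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, n_sensors, n_times = 92, 306, 120            # assumed dimensions
meg = rng.standard_normal((n_images, n_sensors, n_times))  # stand-in for MEG data

# Model RDMs as condensed upper-triangle vectors (e.g. from behavioral
# judgments or a deep neural network layer); random placeholders here.
n_pairs = n_images * (n_images - 1) // 2
model_rdms = {
    "behavior": rng.random(n_pairs),
    "dnn_layer": rng.random(n_pairs),
}

# For each time point, build the neural RDM across image-specific sensor
# patterns and correlate it with each model RDM.
timecourses = {name: np.zeros(n_times) for name in model_rdms}
for t in range(n_times):
    neural_rdm = pdist(meg[:, :, t], metric="correlation")  # 1 - r between patterns
    for name, model_rdm in model_rdms.items():
        rho, _ = spearmanr(neural_rdm, model_rdm)
        timecourses[name][t] = rho

# The latency at which a given model starts to explain the neural data can then
# be read off its correlation time course (e.g. the first sustained above-chance window).
```

In practice, the time point at which conceptual (behavioral or semantic) models begin to correlate with the neural RDMs, relative to low-level visual models, is what supports latency estimates of the kind reported above.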
We are currently conducting a large-scale study of both behavioral and neural responses to thousands of object images to better characterize the nature of these representations and the underlying neural processing. Elucidating how the brain enables us to recognize objects, scenes, faces and bodies provides important insights into the nature of our internal representations of the world around us. Understanding these representations is vital for determining the underlying deficits in many mental health and neurological disorders.
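For illustration, the category-selectivity measurements described in the scene-perception section above are commonly quantified by contrasting voxelwise fMRI responses to one category against responses to the other categories. The sketch below shows one simple way such a contrast could be computed; the data, dimensions and threshold are hypothetical placeholders and do not reflect the project's specific analysis pipeline.

```python
# Minimal sketch of a category-selectivity contrast: compare voxelwise fMRI
# response estimates for scenes against the other categories. All values are
# random placeholders standing in for per-run GLM betas.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_runs, n_voxels = 10, 5000
betas = {cat: rng.standard_normal((n_runs, n_voxels))
         for cat in ("scenes", "faces", "objects", "bodies")}

# Scene selectivity: paired t-test of scene responses vs. the mean of the
# other categories, computed independently for each voxel across runs.
other = np.mean([betas[c] for c in ("faces", "objects", "bodies")], axis=0)
t_vals, p_vals = stats.ttest_rel(betas["scenes"], other, axis=0)

# Voxels exceeding an (arbitrary, illustrative) threshold would be labeled
# scene-selective and grouped into contiguous regions of interest.
scene_selective = (t_vals > 3.0) & (p_vals < 0.001)
print(f"{scene_selective.sum()} of {n_voxels} voxels pass the example threshold")
```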

Support Year: 11
Fiscal Year: 2018
Name: U.S. National Institute of Mental Health
Baker, Chris I; van Gerven, Marcel (2018) New advances in encoding and decoding of brain signals. Neuroimage 180:1-3
Hebart, Martin N; Bankson, Brett B; Harel, Assaf et al. (2018) The representational dynamics of task and object processing in humans. Elife 7:
Malcolm, George L; Silson, Edward H; Henry, Jennifer R et al. (2018) Transcranial Magnetic Stimulation to the Occipital Place Area Biases Gaze During Scene Viewing. Front Hum Neurosci 12:189
Bankson, B B; Hebart, M N; Groen, I I A et al. (2018) The temporal evolution of conceptual object representations revealed through models of behavior, semantics and deep neural networks. Neuroimage 178:172-182
Silson, Edward H; Reynolds, Richard C; Kravitz, Dwight J et al. (2018) Differential Sampling of Visual Space in Ventral and Dorsal Early Visual Cortex. J Neurosci 38:2294-2303
Groen, Iris I A; Greene, Michelle R; Baldassano, Christopher et al. (2018) Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior. Elife 7:
Zeidman, Peter; Silson, Edward Harry; Schwarzkopf, Dietrich Samuel et al. (2018) Bayesian population receptive field modelling. Neuroimage 180:173-187
Hebart, Martin N; Baker, Chris I (2018) Deconstructing multivariate decoding for the study of brain function. Neuroimage 180:4-18
Torrisi, Salvatore; Chen, Gang; Glen, Daniel et al. (2018) Statistical power comparisons at 3T and 7T with a GO / NOGO task. Neuroimage 175:100-110
Roth, Zvi N; Heeger, David J; Merriam, Elisha P (2018) Stimulus vignetting and orientation selectivity in human visual cortex. Elife 7:

Showing the most recent 10 out of 44 publications