The goal of this research is to understand how we see what we see: how does the brain analyze the light falling on the retina of the eye to encode a world full of objects, people, and things? During the past year we have continued to investigate 1) the interaction between bottom-up (sensory-driven) and top-down (internally driven) processing in the brain, focusing on the impact of task or behavioral goals, and 2) the perception of complex visual stimuli, focusing most recently on visual scenes.

1) Interaction between bottom-up and top-down processing
Our visual perception is the product of an interaction between bottom-up sensory information and top-down, internally generated signals that guide interpretation of the input and reflect our prior knowledge and intent. Different tasks require different types of visual information to be extracted from a stimulus, depending on the behavioral goals of the observer. We have been investigating how the representations of complex visual stimuli vary according to the task a participant is performing. We presented participants with images of everyday objects (e.g. cow, tree, motorbike) and asked them to answer simple questions about each object, such as whether it is big or small, or manmade or natural (Harel et al., 2014, Proceedings of the National Academy of Sciences). First, we found that we could decode the task a participant was performing on a given visual object from activity patterns in multiple regions throughout the brain. Further, we found a strong distinction between tasks that emphasized physical properties of the stimuli (e.g. color) and tasks that emphasized conceptual properties (e.g. real-world size). Second, we are now extending this work to visual scenes. Previously, we found that scene representations in a region of the brain thought to be critical for scene recognition primarily reflect the spatial properties of scenes (e.g. whether they are open or closed) rather than their semantic properties (i.e. scene category, such as office or beach). In our current work, we are investigating how these representations change with task by asking participants to focus on particular aspects of the scenes presented.

2) Perception of real-world scenes
Real-world scenes are incredibly complex and heterogeneous, yet we identify and categorize them effortlessly. While prior studies have identified several brain regions that appear to be specialized for scene processing, the precise roles of these regions remain unclear. Building on a general framework for visual processing in the brain that we recently proposed, we are currently investigating the extent to which basic properties of these regions (e.g. preference for particular parts of the visual field, size of receptive fields) account for their role in scene processing, and how these properties relate to behavioral assessments of visual scenes.

Elucidating how the brain enables us to recognize objects, scenes, faces, and bodies provides important insights into the nature of our internal representations of the world around us. Understanding these representations is vital for determining the underlying deficits in many mental health and neurological disorders.
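As an illustration of the decoding approach described under 1), the following is a minimal, self-contained sketch of cross-validated multivariate pattern analysis: training a linear classifier to predict which of two tasks was performed from voxel activity patterns. The data here are simulated and the trial counts, voxel counts, and classifier choice are illustrative assumptions, not the actual analysis pipeline of Harel et al. (2014).

# Minimal sketch: decoding task identity from (simulated) voxel activity patterns.
# Above-chance cross-validated accuracy indicates that the patterns carry
# information about the task. All numbers here are hypothetical.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials_per_task = 50   # hypothetical trials per task condition
n_voxels = 200           # hypothetical voxels in a region of interest

# Simulate activity patterns for two tasks with a small mean difference,
# standing in for beta estimates extracted from fMRI data.
task_a = rng.normal(0.0, 1.0, (n_trials_per_task, n_voxels))
task_b = rng.normal(0.3, 1.0, (n_trials_per_task, n_voxels))
X = np.vstack([task_a, task_b])
y = np.array([0] * n_trials_per_task + [1] * n_trials_per_task)

# 5-fold cross-validated classification accuracy with a linear SVM.
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")

In practice the same logic is applied separately to patterns from each brain region, and regions where accuracy exceeds chance are taken to represent the task.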

Support Year: 7
Fiscal Year: 2014
Name: U.S. National Institute of Mental Health
Torrisi, Salvatore; Chen, Gang; Glen, Daniel et al. (2018) Statistical power comparisons at 3T and 7T with a GO/NOGO task. Neuroimage 175:100-110
Roth, Zvi N; Heeger, David J; Merriam, Elisha P (2018) Stimulus vignetting and orientation selectivity in human visual cortex. Elife 7:
Baker, Chris I; van Gerven, Marcel (2018) New advances in encoding and decoding of brain signals. Neuroimage 180:1-3
Hebart, Martin N; Bankson, Brett B; Harel, Assaf et al. (2018) The representational dynamics of task and object processing in humans. Elife 7:
Malcolm, George L; Silson, Edward H; Henry, Jennifer R et al. (2018) Transcranial Magnetic Stimulation to the Occipital Place Area Biases Gaze During Scene Viewing. Front Hum Neurosci 12:189
Bankson, B B; Hebart, M N; Groen, I I A et al. (2018) The temporal evolution of conceptual object representations revealed through models of behavior, semantics and deep neural networks. Neuroimage 178:172-182
Silson, Edward H; Reynolds, Richard C; Kravitz, Dwight J et al. (2018) Differential Sampling of Visual Space in Ventral and Dorsal Early Visual Cortex. J Neurosci 38:2294-2303
Groen, Iris IA; Greene, Michelle R; Baldassano, Christopher et al. (2018) Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior. Elife 7:
Zeidman, Peter; Silson, Edward Harry; Schwarzkopf, Dietrich Samuel et al. (2018) Bayesian population receptive field modelling. Neuroimage 180:173-187
Hebart, Martin N; Baker, Chris I (2018) Deconstructing multivariate decoding for the study of brain function. Neuroimage 180:4-18

Showing the most recent 10 out of 44 publications