The goal of this proposal is to test several hypotheses about how structural and semantic information is represented in human visual cortex, and how these representations are modulated by attention. The proposal rests on a key technical innovation: a nonlinear system identification framework for estimating quantitative voxel-based receptive field (VRF) models from functional MRI data. These VRF models embody specific hypotheses about visual representation, and they provide clear predictions that can be tested and evaluated.
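The core logic of such a voxel-based encoding framework can be illustrated with a minimal sketch: stimulus features are regressed onto each voxel's response, and the fitted model is judged by how well it predicts held-out data. All of the details below (the feature matrix, the ridge penalty, the simulated responses) are illustrative assumptions, not the proposal's actual pipeline.

```python
# Minimal sketch of voxel-wise encoding-model estimation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 200, 50, 30, 5

# Simulated stimulus features (stand-ins for, e.g., filter outputs)
# and simulated voxel responses generated from hidden true weights.
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_vox))
Y_test = X_test @ W_true + 0.5 * rng.standard_normal((n_test, n_vox))

# Ridge regression, W = (X'X + aI)^-1 X'Y, fit jointly for all voxels.
alpha = 10.0
W_hat = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_feat),
                        X_train.T @ Y_train)

# Evaluation: correlation between predicted and observed held-out
# responses, yielding one prediction score per voxel.
Y_pred = X_test @ W_hat
r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_vox)])
print(r.round(2))
```

The per-voxel prediction score is what makes the framework hypothesis-driven: a feature space that captures what a voxel represents will predict its held-out responses well, and one that does not will fail.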
In Aim 1 we propose to investigate the representation of structural information in several retinotopically organized visual areas (V1, V2, V3, V4, and lateral occipital cortex). To address this issue we will compare several potential VRF models that encode different types of information about shape, motion, and color.
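Model comparison of this kind reduces to fitting each candidate feature space and asking which one predicts held-out responses best. The sketch below uses two random feature matrices as stand-ins for, say, motion-energy versus color features; the ridge solver and all parameters are assumptions for illustration.

```python
# Illustrative comparison of two candidate VRF feature spaces by
# held-out prediction accuracy for a single simulated voxel.
import numpy as np

rng = np.random.default_rng(3)
n_train, n_test = 150, 50

def holdout_corr(X_tr, X_te, y_tr, y_te, alpha=1.0):
    """Ridge fit on training data; correlation on held-out data."""
    w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(X_tr.shape[1]),
                        X_tr.T @ y_tr)
    return np.corrcoef(X_te @ w, y_te)[0, 1]

# Feature space A actually drives the simulated voxel; B is unrelated.
XA_tr, XA_te = rng.standard_normal((n_train, 10)), rng.standard_normal((n_test, 10))
XB_tr, XB_te = rng.standard_normal((n_train, 10)), rng.standard_normal((n_test, 10))
w_true = rng.standard_normal(10)
y_tr = XA_tr @ w_true + 0.3 * rng.standard_normal(n_train)
y_te = XA_te @ w_true + 0.3 * rng.standard_normal(n_test)

r_A = holdout_corr(XA_tr, XA_te, y_tr, y_te)
r_B = holdout_corr(XB_tr, XB_te, y_tr, y_te)
print(round(r_A, 2), round(r_B, 2))  # model A should predict far better
```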
In Aim 2 we propose to investigate semantic representation in non-retinotopic visual cortex anterior to lateral occipital. To accomplish this we will explore a range of semantic encoding models that describe how each voxel represents the semantic content of natural images (e.g., whether an image is an indoor or outdoor scene, or whether it contains faces, etc.). We will use these semantic VRF models to investigate functional regions-of-interest proposed in previous studies (e.g., the fusiform face area FFA), and to characterize non-retinotopic cortex whose function is currently unknown.
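A semantic encoding model of the kind described can be sketched by describing each image with binary category labels and fitting a linear map from labels to a voxel's response. The specific categories, the noise level, and the "face-selective" voxel below are all hypothetical illustrations.

```python
# Illustrative semantic encoding model: binary category labels per image,
# linear weights per voxel. A face-selective voxel should load heavily
# on the "contains_face" feature and weakly on the scene features.
import numpy as np

rng = np.random.default_rng(1)
n_images = 300

# Binary semantic features per image: [indoor, outdoor, contains_face].
indoor = rng.integers(0, 2, n_images)
features = np.column_stack([indoor, 1 - indoor,
                            rng.integers(0, 2, n_images)])

# Simulated face-selective voxel: responds mainly when a face is present.
response = 2.0 * features[:, 2] + 0.3 * rng.standard_normal(n_images)

# Least-squares fit of this voxel's semantic weights.
weights, *_ = np.linalg.lstsq(features, response, rcond=None)
print(weights.round(2))  # the face weight should dominate
```

The fitted weight profile is the voxel's semantic tuning; profiles of this sort are what would let one test proposed functional regions such as the FFA and characterize cortex whose selectivity is unknown.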
In Aim 3 we propose to examine how spatial and feature-based attention affect the way structural and semantic information is represented. We will characterize attentional modulation in terms of its effects on response gain, orientation and spatial frequency tuning, and semantic tuning. These experiments will provide new insights about visual representations, and will produce new computational encoding models that accurately predict how visual cortex responds during natural vision.
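One of the modulation types named above, a pure response-gain change, can be sketched as follows: the same orientation tuning curve is measured with and without attention, and a single multiplicative gain factor is recovered by least squares. The tuning-curve shape, noise level, and gain value are illustrative assumptions.

```python
# Illustrative recovery of an attentional gain factor: attended responses
# are modeled as gain * unattended responses, with tuning shape unchanged.
import numpy as np

rng = np.random.default_rng(2)
orientations = np.linspace(0, np.pi, 16)

# Gaussian-like orientation tuning curve centered near 90 degrees.
tuning = np.exp(-((orientations - np.pi / 2) ** 2) / (2 * 0.3 ** 2))

true_gain = 1.8  # attention scales responses multiplicatively (assumed)
unattended = tuning + 0.05 * rng.standard_normal(16)
attended = true_gain * tuning + 0.05 * rng.standard_normal(16)

# Least-squares gain estimate: attended ~ gain * unattended.
gain_hat = (unattended @ attended) / (unattended @ unattended)
print(round(gain_hat, 2))
```

Fitting full VRF models separately under each attention condition generalizes this idea, since it can also reveal shifts in tuning rather than gain alone.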
Disorders of central vision can severely affect quality of life. The design of treatments and devices for improving visual function will depend critically on the availability of computational algorithms that accurately describe and predict visual function under natural viewing conditions. This proposal aims to develop quantitative predictive models for visual areas that represent structural and semantic visual information.