The goal of this proposal is to test several hypotheses about how structural and semantic information is represented in human visual cortex, and how these representations are modulated by attention. The proposal rests on a key technical innovation: a nonlinear system identification framework for estimating quantitative voxel-based receptive field (VRF) models from functional MRI data. These VRF models embody specific hypotheses about visual representation, and they provide clear predictions that can be tested and evaluated directly.
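The proposal itself does not include an implementation, but the VRF estimation framework can be illustrated with a minimal sketch of the common "linearized" system identification approach: stimuli are passed through a hypothesized nonlinear feature transform, regularized linear weights are fit per voxel, and competing models are compared by their prediction accuracy on held-out data. All identifiers and the synthetic data below are illustrative assumptions, not details taken from the proposal.

```python
# Minimal sketch of voxel-wise receptive field (VRF) estimation, assuming a
# "linearized" framework: a fixed nonlinear feature transform of the stimulus
# followed by per-voxel ridge regression. All names here are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins for real data: n_trials stimuli, each described by a
# hypothesized nonlinear feature space (e.g., oriented filter energies),
# and BOLD responses for n_voxels voxels.
n_trials, n_features, n_voxels = 400, 50, 20
features = rng.standard_normal((n_trials, n_features))  # Phi(stimulus)
true_w = rng.standard_normal((n_features, n_voxels)) * (rng.random((n_features, n_voxels)) > 0.8)
responses = features @ true_w + rng.standard_normal((n_trials, n_voxels))

# Split into estimation and held-out validation sets.
train, test = slice(0, 300), slice(300, 400)

# Fit one regularized linear model per voxel (Ridge fits all voxels at once).
model = Ridge(alpha=10.0)
model.fit(features[train], responses[train])
pred = model.predict(features[test])

# Evaluation criterion: per-voxel correlation between predicted and observed
# responses on the held-out data.
pred_z = (pred - pred.mean(0)) / pred.std(0)
obs_z = (responses[test] - responses[test].mean(0)) / responses[test].std(0)
prediction_corr = (pred_z * obs_z).mean(0)
print("median held-out prediction correlation:", np.median(prediction_corr))
```

Under this scheme, alternative VRF models differ only in the feature transform, so they can be compared on equal footing by their held-out prediction correlations.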
In Aim 1 we propose to investigate the representation of structural information in several retinotopically organized visual areas (i.e., V1, V2, V3, V4, and the lateral occipital area). To address this issue we will compare several candidate VRF models that encode different types of information about shape, motion, and color.
In Aim 2 we propose to investigate semantic representation in non-retinotopic visual cortex anterior to the lateral occipital area. To accomplish this we will explore a range of semantic encoding models that describe how each voxel represents the semantic content of natural images (e.g., whether an image depicts an indoor or outdoor scene, or whether it contains faces). We will use these semantic VRF models to investigate functional regions of interest proposed in previous studies (e.g., the fusiform face area, FFA), and to characterize non-retinotopic cortex whose function is currently unknown.
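As a hypothetical illustration of such a semantic encoding model (the label set and all identifiers below are assumptions, not the proposal's actual feature space), each image can be described by binary semantic labels, and each voxel's response can be modeled as a weighted sum of those labels; the fitted weights then characterize that voxel's semantic tuning.

```python
# Minimal sketch of a semantic VRF model, assuming each image is described by
# binary semantic labels and each voxel's response is a weighted sum of those
# labels. The label names and all identifiers are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
labels = ["indoor", "outdoor", "contains_face", "contains_animal", "contains_text"]

n_images, n_voxels = 300, 10
# Binary indicator matrix: which semantic categories appear in each image.
S = rng.integers(0, 2, size=(n_images, len(labels))).astype(float)
responses = S @ rng.standard_normal((len(labels), n_voxels)) + rng.standard_normal((n_images, n_voxels))

# Fit on the first 240 images, reserving the rest for validation.
model = Ridge(alpha=1.0).fit(S[:240], responses[:240])

# The fitted weights define each voxel's semantic tuning: e.g., a putative
# FFA voxel would be expected to load heavily on the contains_face regressor.
for name, w in zip(labels, model.coef_[0]):
    print(f"voxel 0 weight for {name}: {w:+.2f}")
```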
In Aim 3 we propose to examine how spatial and feature-based attention affect the way structural and semantic information is represented. We will characterize attentional modulation in terms of its effects on response gain, on orientation and spatial frequency tuning, and on semantic tuning. These experiments will provide new insights into visual representations, and will produce new computational encoding models that accurately predict how visual cortex responds during natural vision.
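One way to quantify such attentional effects, sketched below under assumed data and an assumed parameterization, is to fit a parametric tuning curve to a voxel's responses in attended and unattended conditions and then compare the fitted gain and tuning-width parameters.

```python
# Minimal sketch of characterizing attentional modulation, assuming attention
# changes the gain and/or width of a voxel's orientation tuning curve. All
# data and parameter values are illustrative, not results from the proposal.
import numpy as np
from scipy.optimize import curve_fit

def tuning(theta, gain, pref, width, baseline):
    """Gaussian orientation tuning curve (theta in degrees; circularity of
    orientation is ignored for simplicity in this sketch)."""
    return baseline + gain * np.exp(-0.5 * ((theta - pref) / width) ** 2)

rng = np.random.default_rng(2)
theta = np.linspace(0, 180, 19)

# Simulated responses: here attention multiplies response gain without
# changing preferred orientation or tuning width (a pure-gain hypothesis).
unattended = tuning(theta, 1.0, 90, 25, 0.2) + 0.05 * rng.standard_normal(theta.size)
attended = tuning(theta, 1.6, 90, 25, 0.2) + 0.05 * rng.standard_normal(theta.size)

# Fit each condition separately and compare the recovered parameters.
p0 = [1.0, 90.0, 20.0, 0.0]
p_un, _ = curve_fit(tuning, theta, unattended, p0=p0)
p_at, _ = curve_fit(tuning, theta, attended, p0=p0)

print(f"gain ratio (attended/unattended): {p_at[0] / p_un[0]:.2f}")
print(f"tuning width change: {p_at[2] - p_un[2]:+.1f} deg")
```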
Disorders of central vision can severely degrade quality of life. The design of treatments and devices for improving visual function will depend critically on the availability of computational algorithms that accurately describe and predict visual function under natural viewing conditions. This proposal aims to develop quantitative, predictive models of the visual areas that represent structural and semantic visual information.