Mental imagery is a salient part of mental awareness, but very little is understood about how visual percepts are generated without retinal input, or about how the visual features known to be an important part of visual representation drive neural activity during mental imagery. Our long-term goal is to provide clinicians with the ability to objectively interpret mental images by accessing the underlying neural activity. The objective of the current work is to develop a basic understanding of the similarities and differences between the representation of visual features in veridical and mental images. Our central hypothesis is that the mechanisms for representing visual features during perception are fundamentally conserved during mental imagery, and that receptive fields linking activity to veridical images should therefore predict activity evoked by mental imagery. Nonetheless, mental images are clearly distinguishable from veridical images, and we consider three potential sources of this difference: (1) exaggerated effects of attention during mental imagery; (2) the predominant influence, during mental imagery, of feedback connections from high-level visual areas with large receptive fields (relative to the retina); and (3) differences between the neural processes that generate mental images and the physical processes that generate retinal images.
Two Specific Aims are proposed, to be pursued using an innovative approach to analyzing functional MRI signals based on voxel-wise modeling of receptive fields. Under this approach, a separate predictive model is constructed for every voxel in the acquired volumes. Each model links the activity measured in a voxel directly to specific visual features, including spatial frequency, orientation, object category, and object location. The models can then be used to decode perceived or recalled scenes from measured brain activity. We expect that our contribution will be an advance in understanding the specific factors that determine the degree of consistency between activity during imagery and perception, as well as a significant advance in our ability to quantitatively model the high-level visual areas where activity is most consistent. This contribution will be significant because it will take several necessary steps toward the development of imagery receptive fields: predictive receptive field models that explain how the visual features in a scene drive activity when the scene is recalled as a mental image. A receptive field model for mental imagery would place within reach a decoding algorithm for objectively interpreting, and even pictorially reconstructing, mental images.
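The voxel-wise modeling approach described above can be illustrated with a minimal sketch. The sketch below uses simulated data in place of real stimulus features and fMRI responses, and fits one ridge-regularized linear encoding model per voxel before decoding by identification (matching each measured activity pattern to the candidate stimulus whose predicted pattern correlates best with it). All dimensions, the regularization value, and the simulated feature matrices are illustrative assumptions, not details from the proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: trials x features (e.g., spatial frequency,
# orientation, object-category indicators) and trials x voxels.
n_train, n_test, n_features, n_voxels = 200, 20, 50, 100

# Simulated ground-truth receptive-field weights and noisy responses,
# standing in for real stimulus features and measured fMRI activity.
W_true = rng.normal(size=(n_features, n_voxels))
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_voxels))
Y_test = X_test @ W_true + 0.5 * rng.normal(size=(n_test, n_voxels))

# Encoding step: a separate ridge regression per voxel. Because all voxels
# share the same feature matrix, the per-voxel solutions can be computed
# jointly with one closed-form solve.
lam = 1.0
W_hat = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_features),
    X_train.T @ Y_train,
)

# Decoding step (identification): predict activity for every candidate
# stimulus, then pick the candidate whose prediction best matches the
# measured pattern.
Y_pred = X_test @ W_hat

def identify(measured):
    """Index of the candidate stimulus best matching a measured pattern."""
    r = [np.corrcoef(measured, pred)[0, 1] for pred in Y_pred]
    return int(np.argmax(r))

hits = sum(identify(Y_test[i]) == i for i in range(n_test))
accuracy = hits / n_test
```

With low simulated noise, identification accuracy far exceeds the 1-in-20 chance level; the same fit-per-voxel, decode-by-prediction structure applies whether the features are low-level (Gabor energy) or high-level (object category).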

Public Health Relevance

The proposed research is relevant to public health because it will advance our understanding of the neural mechanisms responsible for visual mental imagery, a key cognitive process that is critical for visual function and mental health.

National Institute of Health (NIH)
National Eye Institute (NEI)
Research Project (R01)
Program Officer
Araj, Houmam H
Medical University of South Carolina
Schools of Medicine
United States
Naselaris, Thomas; Bassett, Danielle S; Fletcher, Alyson K et al. (2018) Cognitive Computational Neuroscience: A New Conference for an Emerging Discipline. Trends Cogn Sci 22:365-367
St-Yves, Ghislain; Naselaris, Thomas (2018) The feature-weighted receptive field: an interpretable encoding model for complex feature spaces. Neuroimage 180:188-202
Naselaris, Thomas; Kay, Kendrick N (2015) Resolving Ambiguities of MVPA Using Explicit Models of Representation. Trends Cogn Sci 19:551-554
Naselaris, Thomas; Olman, Cheryl A; Stansbury, Dustin E et al. (2015) A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes. Neuroimage 105:215-228
Pearson, Joel; Naselaris, Thomas; Holmes, Emily A et al. (2015) Mental Imagery: Functional Mechanisms and Clinical Applications. Trends Cogn Sci 19:590-602