Methods known as 'multivariate pattern' (MVP) analysis can be used to decode the information carried in patterns of brain activity obtained with functional magnetic resonance imaging (fMRI). However, a new decoding model has to be built for each brain, because two brains (and the representational spaces they employ) are difficult to align at a fine spatial scale. As a consequence, we do not yet know whether different brains use the same codes or idiosyncratic codes to represent the same things. With funding from the National Science Foundation, Drs. James V. Haxby of Dartmouth College and Peter J. Ramadge of Princeton University, in collaboration with Michael Hanke of the University of Magdeburg (Germany), are developing new methods to discover a coding scheme that works accurately across different brains. The methods being developed align brain activity across brains by projecting individual brain data into a common, high-dimensional space. This approach allows the researchers to build models of brain representational spaces for different cortical areas that are valid both across brains and across a wide range of stimuli and cognitive states.

The researchers are developing two algorithms, referred to as 'hyperalignment' and 'functional connectivity hyperalignment.' Hyperalignment rotates the voxel spaces (voxels are the smallest units in a brain image) of individual brains into a single high-dimensional space in which each dimension is a profile of differential responses to stimuli that is shared across brains. Functional connectivity hyperalignment aligns voxel spaces based on the functional connectivity profile (i.e., the pattern of relationships with other brain areas) for each cortical location. Functional connectivity profiles make it possible to model areas that do not respond to external stimuli in a consistent manner, for example, areas in the so-called 'default-intrinsic system' that play a central role in social cognition. The investigators are an interdisciplinary partnership of cognitive neuroscientists and signal-processing engineers who have been working together successfully for several years.
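To make the alignment idea concrete, the following is a minimal sketch in Python, assuming each subject's data is a time-points-by-voxels matrix recorded while all subjects received the same stimulus sequence. The simple iterative Procrustes scheme, the function names, and the parameters here are illustrative assumptions, not the project's exact implementation.

```python
# Minimal sketch of the hyperalignment idea: rotate each subject's voxel space
# into a shared, high-dimensional common space. Assumes each subject's data is
# a (n_timepoints, n_voxels) NumPy array, z-scored per voxel, with matched
# time points across subjects. Names and the iteration scheme are illustrative.
import numpy as np
from scipy.linalg import orthogonal_procrustes


def hyperalign(subject_data, n_iter=3):
    """Return one orthogonal transformation per subject plus the common template."""
    # Start the common-space template from the first subject's responses.
    template = subject_data[0].copy()
    transforms = [np.eye(subject_data[0].shape[1]) for _ in subject_data]

    for _ in range(n_iter):
        for i, data in enumerate(subject_data):
            # Orthogonal rotation that best maps this subject onto the template.
            rotation, _ = orthogonal_procrustes(data, template)
            transforms[i] = rotation
        # Update the template as the mean of all subjects in the common space.
        template = np.mean(
            [data @ rot for data, rot in zip(subject_data, transforms)], axis=0
        )

    return transforms, template
```

For functional connectivity hyperalignment, the same rotation step would operate on each subject's matrix of connectivity profiles (e.g., correlations between each voxel and a common set of connectivity targets) rather than on stimulus-evoked time series, which is what allows alignment of areas that lack consistent stimulus-driven responses.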

Developing the computational methods to build common models of representational spaces will augment the power of brain activity decoding techniques, making it possible to investigate how finer, more detailed information is embedded in brain activity patterns, and to read out that information from functional brain imaging data. The proposed methods also will allow extension of brain decoding to the neural codes that underlie social cognition, that is, the representation of knowledge about the personal traits and mental states of others. These models also will allow investigation of how neural coding is altered within brain regions that are affected by experience, by development, and by psychopathology.

This project is jointly funded by Collaborative Research in Computational Neuroscience and the Office of International Science and Engineering. A companion project is being funded by the German Ministry of Education and Research (BMBF).

Project Report

Decoding brain activity patterns that are measured with functional brain imaging methods, such as functional magnetic resonance imaging (fMRI), has made major advances in understanding the link between those patterns and the information that is represented in perception, memory, language, and thought. Individual brains differ greatly, however, in terms of functional anatomy. Consequently, prior to our work, most decoding analyses built a new decoding engine tailored to each individual brain, leaving unanswered a key question: Do different brains encode information using the same code, or does each brain develop an idiosyncratic code?

With support from NSF, we have developed an algorithm, "hyperalignment", that solves this problem. We derive a transformation for each individual brain that puts the functional architecture of that brain into a "common model space" based on a neural code that is shared across brains. The derivation of the transformation is based on brain activity measured in response to natural, rich, dynamic stimuli (action movies, dramas, natural music), making the transformations valid across a broad range of stimuli and stimulus domains. We developed two versions of hyperalignment that transform the whole cortex into a common model space. The results (Figure 1) show that the method produces valid models across many cortical fields that represent diverse information domains, including high-level vision for objects, animals, and actions, as well as auditory perception, action execution, and social cognition. After individual data are transformed into the common model space, one subject's brain activity patterns can be decoded based on their similarity to brain activity patterns in other brains, at levels that exceed decoding based on that subject's own data.

The common model of representational spaces in human cortex offers a structure for a radically new kind of functional brain atlas. Such an atlas would afford comparison of functional imaging data across studies and research groups at a fine-grained level to which current anatomy-based atlases are blind. The common model also may provide new tools for more powerful assessment of individual differences, affording investigations of how neural representation changes as a function of development, education, genetics, and clinical disorder.
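As a concrete illustration of the between-subject decoding described above, the sketch below labels one subject's response patterns by their correlation with the averaged patterns of the remaining subjects, after all data have been projected into the common model space. The nearest-neighbor correlation classifier and all names here are simplifying assumptions for illustration, not the exact analysis pipeline used in the project.

```python
# Illustrative between-subject classification in the common model space.
# Assumes aligned_patterns is a list of (n_samples, n_dims) arrays, one per
# subject, with the same condition ordering, and labels is the shared
# (n_samples,) array of condition labels.
import numpy as np


def between_subject_classify(aligned_patterns, labels, test_subject):
    """Decode one subject's patterns from the other subjects' averaged patterns."""
    # Average the other subjects' patterns to form one template per condition.
    others = [p for i, p in enumerate(aligned_patterns) if i != test_subject]
    templates = np.mean(others, axis=0)            # (n_samples, n_dims)

    test = aligned_patterns[test_subject]
    predictions = []
    for pattern in test:
        # Correlate the test pattern with every template; take the best match.
        r = [np.corrcoef(pattern, t)[0, 1] for t in templates]
        predictions.append(labels[int(np.argmax(r))])

    predictions = np.asarray(predictions)
    accuracy = float(np.mean(predictions == np.asarray(labels)))
    return predictions, accuracy
```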

Budget Start: 2011-09-15
Budget End: 2014-08-31
Fiscal Year: 2011
Total Cost: $472,420
Name: Dartmouth College
City: Hanover
State: NH
Country: United States
Zip Code: 03755