The brain is extremely complex, involving an intricate interplay between functional information and a structural (but not static) substrate. Brain imaging technology provides a way to sample various aspects of the brain, albeit incompletely, yielding a rich set of multitask and multimodal information. The field has advanced significantly in its approach to multimodal data, as there are now more studies correlating, e.g., functional and structural measures. However, the vast majority of studies still ignore the joint information among two or more modalities or tasks. Such information is critical to consider because each brain imaging modality reports on a different aspect of the brain (e.g., gray matter integrity, blood flow changes, white matter integrity). The field is still striving to understand how to diagnose and treat complex mental illnesses such as schizophrenia, bipolar disorder, and depression, and ignoring the joint information among tasks and modalities is to miss a critical, but available, part of the puzzle.

Combining multimodal imaging data is not easy since, among other reasons, the combination of multiple data sets consisting of thousands of voxels or timepoints yields a very high-dimensional problem, requiring appropriate data reduction strategies. In the previous phase of the project we developed approaches based on multiset canonical correlation analysis (mCCA) and joint independent component analysis (jICA) that can capture high-dimensional, linear relationships among two or more modalities, and which we showed can identify both modality-unique and modality-common features that are predictive of disease. In this new phase of the project we will focus on two important areas. First, we will build on our previous success by extending our models to allow for the incorporation of behavioral/cognitive constraints, as well as developing new approaches that leverage recent advances in deep learning, enabling us to capture higher-order relationships embedded in multimodal and multitask data. Second, we will address the key challenge of integrating possibly thousands of multimodal features by developing a new meta-modality framework that will enable us to bring together the existing and new features in an intuitive manner. This framework will also enable us to capture changes in multimodal information that might not be harmful separately but are jointly sufficient to convey risk of illness, and to identify information flow through the meta-modal space for developing potential targets for treatment.

We will apply these approaches to one of the largest multimodal imaging datasets of psychosis and mood disorders. Our proposed approach will be thoroughly evaluated using this large data set, which includes multiple illnesses (schizophrenia, bipolar disorder, and unipolar depression) that have overlapping symptoms and can sometimes be misdiagnosed and treated with the wrong medications for months or years. As before, we will provide open-source tools and release data throughout the duration of the project via a web portal and the NITRC repository, enabling other investigators to compare their own methods with ours as well as to apply them to a large variety of brain disorders.
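To make the jICA fusion idea above concrete, the following is a minimal, hypothetical Python sketch using scikit-learn's FastICA on synthetic data; the array shapes, variable names, and preprocessing are illustrative assumptions, not the project's actual pipeline. Feature maps from two modalities are concatenated per subject, and ICA then yields joint spatial components whose subject-wise loadings are shared across modalities:

import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-ins: one row per subject, one column per voxel/feature.
rng = np.random.default_rng(0)
n_subjects, n_fmri, n_smri = 50, 2000, 2000
fmri = rng.standard_normal((n_subjects, n_fmri))   # e.g., fMRI contrast maps
smri = rng.standard_normal((n_subjects, n_smri))   # e.g., gray matter maps

# z-score each modality so neither dominates the joint decomposition.
fmri = (fmri - fmri.mean()) / fmri.std()
smri = (smri - smri.mean()) / smri.std()

# jICA: concatenate modalities along the feature axis, then run spatial
# ICA (voxels as samples) so each joint component spans both modalities
# but carries a single shared subject loading vector.
joint = np.hstack([fmri, smri])                    # (subjects, joint features)
ica = FastICA(n_components=8, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(joint.T)          # (joint features, components)
subject_loadings = ica.mixing_                     # (subjects, components)

# Split each joint component back into its modality-specific portions.
fmri_maps = spatial_maps[:n_fmri, :]
smri_maps = spatial_maps[n_fmri:, :]

Because each component's subject loadings are shared across modalities, group differences in those loadings (e.g., patients versus controls) implicate linked functional and structural patterns jointly rather than either modality alone.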
The promise of multimodal imaging is clear, and we have shown the power of linear joint N-way analysis during the previous funding period. However, this is just the beginning. In this renewal, we will build on and significantly expand the goals of the original aims by incorporating additional joint information (including dynamic and potentially nonlinear factors), as well as a framework for integrating the resulting information in order to enable decision making and identification of potential targets for further study or possible treatment. We will also disseminate our approaches through software tools and interactive web-based visualization of available data.