Today, many areas of science and engineering face growing challenges in synthesizing information from the ever-increasing amount of available data. These challenges are compounded when the data vary in type and format, as is the case for three-dimensional objects, which can be described by several representations (modalities): shapes, images, and text. Recently, deep learning methods have proven effective at exposing relationships among objects without relying on hard-coded metrics; however, such methods focus on single modalities. This project seeks to design and develop a model that unifies multiple types of data, such as three-dimensional shapes, images, and text, in a single quantitative model. Following a rigorous approach, the team of researchers from Wayne State University will map the multimodal, heterogeneous representations and features onto a universal high-dimensional encoding space characterized by a uniform representation and metric. The team will then validate the work by applying the research results to micro Magnetic Resonance Imaging (micro-MRI) microvascular data collected in collaboration with area health science professionals. The project bridges a significant gap in neuroscience data analysis and will produce a cyberinfrastructure framework that will stimulate research in the field. It will also provide educational activities for undergraduate and graduate students, as well as outreach to local middle school students. This project serves the national interest, as stated in NSF's mission: to promote the progress of science and to advance the national health, prosperity, and welfare.
The research goal of this proposal centers on a unified, theoretical, data-driven joint embedding framework for multimodal data. The framework involves the design of a high-dimensional multimodal feature vector, probability-based joint embedding, and deep neural networks, making it possible to represent and process large-scale microvascular networks effectively and from a new perspective. The proposed deep-neural-network realization can transform a three-dimensional shape, together with heterogeneous imaging, textual, and other features obtained from a large dataset, into a novel high-dimensional isometric multi-view (shape, image, and text) probability space. The proposed joint embedding space preserves all intrinsic geometric, imaging, and textual characteristics and can integrate other multimodal properties. By equipping the embedded vector field with a unified metric, the generalized joint embedding space enables formal and diverse study of the geometric scalability and variability that pervade shape processing and measurement in 3D multimodal data informatics. In this joint embedding space, global and local shape comparison and analysis can be easily computed and measured with the unified metric, which will significantly increase the system's automation, reduce human intervention, and support the discovery of new knowledge about vascular diseases.
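The core idea of a joint embedding with a unified metric can be illustrated with a minimal sketch. The code below is not the proposal's architecture: it substitutes random linear projections for the deep neural network encoders, and all names, dimensions, and the choice of cosine distance are illustrative assumptions. It shows only the structural point that once shape, image, and text features land in one normalized space, a single distance function serves every cross-modal comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature dimensions (stand-ins for shape
# descriptors, imaging features, and text features).
DIMS = {"shape": 64, "image": 128, "text": 32}
EMBED_DIM = 16  # dimension of the shared embedding space

# Stand-in "encoders": one fixed random linear projection per modality.
# The proposal calls for deep neural networks; linear maps keep this
# sketch self-contained and runnable.
encoders = {m: rng.standard_normal((d, EMBED_DIM)) for m, d in DIMS.items()}

def embed(features: np.ndarray, modality: str) -> np.ndarray:
    """Project modality-specific features into the shared space and
    L2-normalize, so one metric applies uniformly to every modality."""
    z = features @ encoders[modality]
    return z / np.linalg.norm(z)

def unified_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance in the joint space: the same metric handles
    shape-shape, shape-image, image-text, ... comparisons."""
    return 1.0 - float(a @ b)

# One object observed through three modalities.
z_shape = embed(rng.standard_normal(DIMS["shape"]), "shape")
z_image = embed(rng.standard_normal(DIMS["image"]), "image")
z_text = embed(rng.standard_normal(DIMS["text"]), "text")

# Cross-modal comparisons all reduce to the same distance call.
print(unified_distance(z_shape, z_image))
print(unified_distance(z_shape, z_text))
```

In the proposed framework, the random projections above would be replaced by trained encoders so that embeddings of the same object from different modalities land close together under the unified metric.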
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.