Biologists and other human-health-related scientists have been employing informatics approaches that integrate disparate data types (e.g., molecular, clinical) to make new discoveries about the biological basis of diseases, the treatment of diseases, and response to therapy. Human imaging is a rich source of phenotypic information that could be integrated with these other data, but images have been largely inaccessible to biologists for use in their investigations because the information they contain is usually not quantitative. Making images and quantitative characterizations of visualized tissues available to the larger community holds great promise for accelerating research and discovery, including the development of imaging biomarkers in cancer. The first critical step in the development and use of imaging biomarkers in cancer is the segmentation of the target lesions from their environments. Once the lesions have been segmented, one can computationally characterize many lesion image features for integration with other data types. To accelerate progress toward developing and optimizing algorithms for lesion segmentation and characterization, we will develop, deploy, and disseminate an informatics platform. The Cloud-based Image Biomarker Optimization Platform (C-BIBOP) will include 1) imaging data stored locally or accessed through curated repositories such as the Cancer Imaging Archive, 2) a set of segmentation and feature computation algorithms that can be run on these or newly uploaded data, 3) the outputs of lesion segmentation algorithms for these data, 4) the outputs of feature computation algorithms for these data, and 5) a set of metrics and visualization tools for comparing the performance of these algorithms, segmentations, and features. Specifically, we will develop C-BIBOP for the large-scale central analysis of multi-institutional quantitative image data by building a cloud-based infrastructure that supports customized computing environments, experiments that include images and associated metadata, and a reporting module that performs comparisons, statistical analyses, and visualizations of the results of segmentation and characterization. The basic infrastructure will initially be populated with baseline algorithms, segmentations, and image descriptors developed by Columbia, MGH, Moffitt, and Stanford (CMMS) investigators, as well as limited datasets. We will deploy C-BIBOP on a cloud platform; develop and share experiments consisting of data, algorithms, and explorations of parameter spaces; and evaluate it at the participating institutions with state-of-the-art algorithms and well-curated datasets. Finally, we have identified a set of early adopters and beta-testers from within the Quantitative Imaging Network, as well as external collaborators and industrial partners, who have indicated their willingness to contribute algorithms, data, and results to C-BIBOP. We will host at least two permanent online collections of images and maintain the best segmentations and characterizations available, which can be utilized by participants at any time.
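As an illustration of the kind of comparison metric such a reporting module could compute, the Python sketch below evaluates the Dice similarity coefficient, a standard measure of overlap between a candidate lesion segmentation and a reference mask. The specific metrics and data formats used by C-BIBOP are not enumerated here, and the masks in the example are hypothetical.

    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        # Dice similarity coefficient between two binary segmentation masks.
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        intersection = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * intersection / total

    # Hypothetical example: compare an algorithm's lesion mask to a reference segmentation.
    reference = np.zeros((64, 64, 64), dtype=bool)
    reference[20:40, 20:40, 20:40] = True
    candidate = np.zeros_like(reference)
    candidate[22:42, 20:40, 20:40] = True
    print(f"Dice = {dice_coefficient(reference, candidate):.3f}")

A platform-level reporting module would apply metrics of this sort across many lesions and algorithms, then aggregate and visualize the results for comparison.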
The Cloud-based Image Biomarker Optimization Platform (C-BIBOP), a technical resource for the larger cancer research community, will have broad impact. C-BIBOP will enable developers of image processing algorithms to share and compare their lesion segmentation and feature characterization algorithms on publicly available data, which will accelerate the availability of advanced, well-tested, open-access algorithms. Biologists will be able to use these algorithms to integrate image phenotype data with molecular and clinical data to better understand the manifestations of cancer, and clinical researchers will be able to use them to derive robust image biomarkers of specific cancer types, which will, in turn, have utility for precision therapy and monitoring of response.
Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L et al. (2018) Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images. J Digit Imaging 31:403-414
Zhou, M; Scott, J; Chaudhury, B et al. (2018) Radiomics in Brain Tumor: Image Assessment, Quantitative Feature Descriptors, and Machine-Learning Approaches. AJNR Am J Neuroradiol 39:208-216
Paul, Rahul; Liu, Ying; Li, Qian et al. (2018) Representation of Deep Features using Radiologist defined Semantic Features. Proc Int Jt Conf Neural Netw 2018
Chang, Ken; Balachandar, Niranjan; Lam, Carson et al. (2018) Distributed deep learning networks among institutions for medical imaging. J Am Med Inform Assoc 25:945-954
Alahmari, Saeed S; Cherezov, Dmitry; Goldgof, Dmitry et al. (2018) Delta Radiomics Improves Pulmonary Nodule Malignancy Prediction in Lung Cancer Screening. IEEE Access 6:77796-77806
Paul, Rahul; Hall, Lawrence; Goldgof, Dmitry et al. (2018) Predicting Nodule Malignancy using a CNN Ensemble Approach. Proc Int Jt Conf Neural Netw 2018
Balagurunathan, Yoganand; Beers, Andrew; Kalpathy-Cramer, Jayashree et al. (2018) Semi-automated pulmonary nodule interval segmentation using the NLST data. Med Phys 45:1093-1107
Newitt, David C; Malyarenko, Dariya; Chenevert, Thomas L et al. (2018) Multisite concordance of apparent diffusion coefficient measurements across the NCI Quantitative Imaging Network. J Med Imaging (Bellingham) 5:011003
Paul, Rahul; Hawkins, Samuel H; Schabath, Matthew B et al. (2018) Predicting malignant nodules by fusing deep features with classical radiomics features. J Med Imaging (Bellingham) 5:011021
Elhalawani, Hesham; Lin, Timothy A; Volpe, Stefania et al. (2018) Machine Learning Applications in Head and Neck Radiation Oncology: Lessons From Open-Source Radiomics Challenges. Front Oncol 8:294