Prostate cancer is one of the most commonly occurring forms of cancer, accounting for 21% of all cancers in men. The Prostate Imaging Reporting and Data System (PI-RADS) aims to standardize the reporting of prostate cancer using multi-parametric magnetic resonance imaging (mpMRI). However, the in-depth analysis demanded by PI-RADS remains challenging due to the complexity and heterogeneity of the disease, and it is a clinically burdensome task subject to significant intra- and inter-reader variability. Auxiliary tools based on machine learning methods such as deep learning can reduce diagnostic variability and increase workload efficiency by automatically performing tasks and presenting results to a radiologist for decision support. In particular, automated identification and classification of lesion candidates from imaging data can be performed with respect to PI-RADS scoring. In Phase I of this project, we developed two automated methods to reduce intra- and inter-observer variability in interpreting mpMRI images under the PI-RADS protocol: (i) a method to co-register mpMRI data, and (ii) a method to geometrically segment the prostate gland into the PI-RADS sector map. The overarching goal of this Phase II project is to develop machine learning algorithms that incorporate both co-registered multi-modal imaging biomarkers and PI-RADS sector map information into an automated clinical diagnostic aid. The innovation in this project lies in the use of deep learning to automatically predict PI-RADS classification. This project is significant in that it has the potential to improve clinical efficiency and reduce diagnostic variation in prostate cancer diagnosis.
In Aim 1 of this project, we will develop a deep learning approach to localize and classify lesions in mpMRI.
In Aim 2, we will integrate this diagnostic tool into the ProFuseCAD system and perform rigorous multi-site validation to quantify PI-RADS classification performance.
Both aims will utilize a database of over 1,000 existing mpMRI images from multiple clinical sites to develop and validate the algorithms. Ultimately, enhancements from this project will create a novel feature for Eigen's (the applicant company's) FDA 510(k)-cleared imaging product, ProFuseCAD, to improve the diagnosis and reporting of prostate cancer.
Radiological interpretation of multimodal prostate imaging data is challenging and subject to high levels of variability. To address this problem, auxiliary tools based on machine learning methods such as deep learning can increase workload efficiency by automatically performing tasks and presenting results to a radiologist for the purpose of decision support. In particular, automated identification of lesion candidates and assessment of potentially benign or malignant lesions with respect to specific PI-RADS categories from clinical imaging data can improve prostate cancer reporting and reduce variation in radiological interpretation.