Accurate segmentation of the prostate is important for compensating for daily prostate motion during image-guided radiation therapy. It is also important for adaptive radiation therapy, which seeks to maximize dose to the tumor while minimizing dose to healthy tissue. The goal of this project is to develop a novel method for online learning of patient-specific appearance and shape deformation information to significantly improve prostate segmentation from daily CT images.

Our first two specific aims focus on developing an online-learning method that progressively builds patient-specific appearance and shape deformation models from the treatment images acquired for the same patient, in order to guide more accurate segmentation of the prostate. Because population-based appearance and shape deformation models are not specific to the patient under study, they are used only for prostate segmentation in the early treatment days. Once patient-specific information has been collected online from a sufficient number of treatment images, it begins to replace the population-based information in the segmentation process. In addition, the requirement of strong point-to-point correspondence in conventional model-based methods will be removed by reformulating the appearance matching in these methods as a new registration problem, significantly improving the flexibility, and ultimately the accuracy, of prostate segmentation.

Our third specific aim is to rapidly register the segmented prostates in the planning image and each treatment image of a patient, by learning online the patient-specific correlations between the deformations of the prostate boundary and its internal regions. This will allow fast warping of the treatment plan from the planning image space to the treatment image space for adaptive radiotherapy, and will also allow dosimetric evaluation of the radiotherapy.
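The gradual replacement of population-based information by patient-specific information can be sketched as a simple weighted blend whose patient-specific weight grows with the number of treatment images collected. This is a minimal illustrative sketch, not the project's actual algorithm; the mean-feature representation, the function names, and the `n_required` threshold are all assumptions introduced here for illustration.

```python
import numpy as np

def blend_models(pop_model, patient_samples, n_required=5):
    """Blend a population-based appearance model with a patient-specific one.

    Hypothetical sketch: each model is represented as a mean feature vector.
    The patient-specific weight w grows linearly with the number of treatment
    images collected, so the population prior dominates in early treatment
    days and is gradually replaced once enough patient data is available.
    """
    pop_model = np.asarray(pop_model, dtype=float)
    n = len(patient_samples)
    if n == 0:
        return pop_model                      # early days: population prior only
    w = min(1.0, n / n_required)              # patient-specific weight in [0, 1]
    patient_model = np.mean(patient_samples, axis=0)
    return (1.0 - w) * pop_model + w * patient_model
```

For example, with one patient sample and `n_required=5`, the blend weights the population model 0.8 and the patient model 0.2; after five or more samples, the population model is fully replaced.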
Our fourth specific aim is to evaluate the proposed prostate segmentation and registration algorithms using both physical-phantom and real patient data, and to compare their performance with that of existing prostate segmentation algorithms. With the successful development of these more accurate segmentation and fast registration methods, the effectiveness of radiotherapy for cancer treatment will be greatly improved. To benefit the research community, the final method developed in this project will also be incorporated into PLanUNC, a full-featured, fully documented, open-source treatment planning system developed at UNC, and will be made freely available to the public.
Description of Project

This project aims to develop a novel method for online learning of patient-specific appearance and shape deformation information, as a way to significantly improve prostate segmentation and registration from daily CT images of a patient during image-guided radiation therapy. The final methods, once validated, will be incorporated into PLanUNC, a full-featured, fully documented, open-source treatment planning system developed at UNC, and will be made freely available to the public.
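The third aim's idea of predicting internal prostate deformation from boundary deformation can be illustrated with a simple least-squares regression: given training pairs of boundary and internal displacement vectors, fit a linear map and apply it to a new boundary deformation. This is a hedged sketch under strong simplifying assumptions (a linear correlation model, vectorized displacements); the function names are hypothetical and do not come from the project itself.

```python
import numpy as np

def learn_boundary_to_internal(boundary_disps, internal_disps):
    """Fit a linear map W such that internal_disps ~= boundary_disps @ W.

    boundary_disps: (n_samples, n_boundary_dofs) boundary deformations
    internal_disps: (n_samples, n_internal_dofs) internal deformations
    Returns the least-squares estimate of W.
    """
    W, *_ = np.linalg.lstsq(boundary_disps, internal_disps, rcond=None)
    return W

def predict_internal(W, boundary_disp):
    """Predict an internal deformation from a new boundary deformation."""
    return boundary_disp @ W
```

Such a learned correlation would let a new treatment image's internal deformation field be estimated from the segmented boundary alone, enabling fast warping of the treatment plan.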
Zhang, Jun; Liu, Mingxia; Shen, Dinggang (2017) Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks. IEEE Trans Image Process 26:4753-4764
Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang et al. (2017) Robust multi-atlas label propagation by deep sparse representation. Pattern Recognit 63:511-517
Gao, Yaozong; Shao, Yeqin; Lian, Jun et al. (2016) Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests. IEEE Trans Med Imaging 35:1532-1543
Park, Sang Hyun; Gao, Yaozong; Shen, Dinggang (2016) Multiatlas-Based Segmentation Editing With Interaction-Guided Patch Selection and Label Fusion. IEEE Trans Biomed Eng 63:1208-1219
Shi, Yinghuan; Gao, Yaozong; Liao, Shu et al. (2016) A Learning-Based CT Prostate Segmentation Method via Joint Transductive Feature Selection and Regression. Neurocomputing 173:317-331
Wu, Guorong; Peng, Xuewei; Ying, Shihui et al. (2016) eHUGS: Enhanced Hierarchical Unbiased Graph Shrinkage for Efficient Groupwise Registration. PLoS One 11:e0146870
Huynh, Tri; Gao, Yaozong; Kang, Jiayin et al. (2016) Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model. IEEE Trans Med Imaging 35:174-183
Guo, Yanrong; Gao, Yaozong; Shen, Dinggang (2016) Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching. IEEE Trans Med Imaging 35:1077-1089
Wang, Qian; Lu, Le; Wu, Dijia et al. (2015) Automatic Segmentation of Spinal Canals in CT Images via Iterative Topology Refinement. IEEE Trans Med Imaging 34:1694-1704
Shi, Yinghuan; Gao, Yaozong; Liao, Shu et al. (2015) Semi-automatic segmentation of prostate in CT images via coupled feature representation and spatial-constrained transductive lasso. IEEE Trans Pattern Anal Mach Intell 37:2286-2303
Showing the most recent 10 out of 44 publications