Accurate segmentation of the prostate is important for compensating for daily prostate motion during image-guided radiation therapy. It is also important for adaptive radiation therapy, in order to maximize dose to the tumor while minimizing dose to healthy tissue. The goal of this project is to develop a novel method for online learning of patient-specific appearance and shape deformation information to significantly improve prostate segmentation from daily CT images.

Our first two specific aims focus on developing an online-learning method that progressively builds patient-specific appearance and shape deformation models from the treatment images acquired for the same patient, to guide more accurate segmentation of the prostate. Because population-based appearance and shape deformation models are not specific to the patient under study, they are used for prostate segmentation only in the early treatment days; once patient-specific information has been collected online from a sufficient number of treatment images, it starts to replace the population-based information in the segmentation process. In addition, the limitation of requiring strong point-to-point correspondence in conventional model-based methods will be addressed by reformulating the appearance matching in these methods as a new registration problem, thereby improving the flexibility and, ultimately, the accuracy of prostate segmentation.

Our third specific aim is to rapidly register the segmented prostates in the planning image and each treatment image of a patient by learning online the patient-specific correlations between the deformations of the prostate boundary and its internal regions. This will allow fast warping of the treatment plan from the planning image space to the treatment image space for adaptive radiotherapy, and will also allow dosimetric evaluation of the radiotherapy. Our fourth specific aim is to evaluate the proposed prostate segmentation and registration algorithms using both physical phantom and real patient data, and to compare their performance with that of existing prostate segmentation algorithms.

With the successful development of these more accurate segmentation and fast registration methods, the effectiveness of radiotherapy for cancer treatment will be greatly improved. To benefit the research community, the final method developed in this project will also be incorporated into PLanUNC, a full-featured, fully documented, open-source treatment planning system developed at UNC, and will be made freely available to the public.
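The core of the online-learning strategy in the first two aims is a gradual handoff from population-based models to patient-specific ones as treatment images accumulate. The sketch below illustrates one simple way such a handoff could work; the linear weighting rule, the `min_patient_images` threshold, and all function names are illustrative assumptions, not the formulation proposed in this project.

```python
import numpy as np

def blend_models(population_model, patient_images, min_patient_images=5):
    """Blend a population-based appearance model with an online-learned
    patient-specific one.

    The weighting rule here is a hypothetical stand-in: the abstract only
    states that patient-specific information gradually replaces the
    population-based information once enough treatment images are available.
    """
    n = len(patient_images)
    if n == 0:
        # Early treatment days: no patient data yet, rely on the population model.
        return population_model

    # Toy patient-specific model: the mean appearance over treatment images.
    patient_model = np.mean(patient_images, axis=0)

    # Shift weight toward the patient-specific model as images accumulate.
    w = min(1.0, n / min_patient_images)
    return (1.0 - w) * population_model + w * patient_model


# Example: after 3 of the 5 required images, the blend is 40% population, 60% patient.
population = np.zeros((4, 4))                      # placeholder appearance model
daily_scans = [np.ones((4, 4)) for _ in range(3)]  # placeholder treatment images
print(blend_models(population, daily_scans))       # each entry is 0.6
```

In this picture, each new treatment day adds one more sample, so the segmentation model becomes increasingly tailored to the individual patient over the course of treatment.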

Public Health Relevance

This project aims to develop a novel method for online learning of patient-specific appearance and shape deformation information, as a way to significantly improve prostate segmentation and registration from daily CT images of a patient during image-guided radiation therapy. The final methods, once validated, will be incorporated into PLanUNC, a full-featured, fully documented, open-source treatment planning system developed at UNC, and will be made freely available to the public.
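The fast registration in the third aim rests on learning patient-specific correlations between boundary deformations and internal deformations, so that a segmented boundary alone can drive warping of the treatment plan. Below is a minimal sketch of one such correlation model; the choice of closed-form ridge regression, and every name in the snippet, are assumptions for illustration rather than the project's actual method.

```python
import numpy as np

def fit_boundary_to_internal(boundary_defs, internal_defs, alpha=1e-3):
    """Fit a linear map W from boundary deformation vectors to internal
    deformation vectors via ridge regression (an illustrative stand-in
    for the patient-specific correlation model in the third aim).

    boundary_defs: (n_images, n_boundary_dofs) boundary deformations
    internal_defs: (n_images, n_internal_dofs) internal deformations
    """
    X = np.asarray(boundary_defs)
    Y = np.asarray(internal_defs)
    # Closed-form ridge solution: W = (X^T X + alpha * I)^{-1} X^T Y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

def predict_internal(W, boundary_def):
    """Predict internal deformations for a new treatment day from the
    observed boundary deformation alone."""
    return np.asarray(boundary_def) @ W


# Example with synthetic data: 6 prior days, 10 boundary DOFs, 20 internal DOFs.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 10))
Y = rng.standard_normal((6, 20))
W = fit_boundary_to_internal(X, Y)
print(predict_internal(W, X[0]).shape)  # (20,)
```

As with the appearance model, each new treatment day adds a training pair, so the correlation model also becomes increasingly patient-specific as treatment proceeds.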

Agency: National Institutes of Health (NIH)
Institute: National Cancer Institute (NCI)
Type: Research Project (R01)
Project #: 5R01CA140413-04
Application #: 8403568
Study Section: Biomedical Imaging Technology Study Section (BMIT)
Program Officer: Deye, James
Project Start: 2010-07-06
Project End: 2014-12-31
Budget Start: 2013-01-01
Budget End: 2013-12-31
Support Year: 4
Fiscal Year: 2013
Total Cost: $315,017
Indirect Cost: $102,168
Name: University of North Carolina Chapel Hill
Department: Radiation-Diagnostic/Oncology
Type: Schools of Medicine
DUNS #: 608195277
City: Chapel Hill
State: NC
Country: United States
Zip Code: 27599
Zhang, Jun; Liu, Mingxia; Shen, Dinggang (2017) Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks. IEEE Trans Image Process 26:4753-4764
Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan et al. (2017) Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning. Med Image Anal 39:218-230
Dong, Pei; Wang, Li; Lin, Weili et al. (2017) Scalable Joint Segmentation and Registration Framework for Infant Brain Images. Neurocomputing 229:54-62
Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang et al. (2017) Robust multi-atlas label propagation by deep sparse representation. Pattern Recognit 63:511-517
Gao, Yaozong; Shao, Yeqin; Lian, Jun et al. (2016) Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests. IEEE Trans Med Imaging 35:1532-43
Wu, Guorong; Peng, Xuewei; Ying, Shihui et al. (2016) eHUGS: Enhanced Hierarchical Unbiased Graph Shrinkage for Efficient Groupwise Registration. PLoS One 11:e0146870
Shi, Yinghuan; Gao, Yaozong; Liao, Shu et al. (2016) A Learning-Based CT Prostate Segmentation Method via Joint Transductive Feature Selection and Regression. Neurocomputing 173:317-331
Park, Sang Hyun; Gao, Yaozong; Shen, Dinggang (2016) Multiatlas-Based Segmentation Editing With Interaction-Guided Patch Selection and Label Fusion. IEEE Trans Biomed Eng 63:1208-1219
Huynh, Tri; Gao, Yaozong; Kang, Jiayin et al. (2016) Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model. IEEE Trans Med Imaging 35:174-83
Guo, Yanrong; Gao, Yaozong; Shen, Dinggang (2016) Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching. IEEE Trans Med Imaging 35:1077-89

Showing the most recent 10 out of 47 publications