Accurate segmentation of the prostate is important for compensating for daily prostate motion during image-guided radiation therapy. It is also important for adaptive radiation therapy, in order to maximize dose to the tumor while minimizing dose to healthy tissue. The goal of this project is to develop a novel method for online learning of patient-specific appearance and shape deformation information to significantly improve prostate segmentation from daily CT images.

Our first two specific aims focus on developing an online-learning method that progressively builds patient-specific appearance and shape deformation models from the treatment images acquired for the same patient, in order to guide more accurate segmentation of the prostate. Because population-based appearance and shape deformation models are not specific to the patient under study, they are used only for prostate segmentation in the early treatment days. Once patient-specific information has been collected online from a sufficient number of treatment images, it progressively replaces the population-based information in the segmentation process. In addition, the requirement for strong point-to-point correspondence in conventional model-based methods will be removed by reformulating the appearance matching in these methods as a new registration problem, thereby significantly improving the flexibility and ultimately the accuracy of prostate segmentation.

Our third specific aim is to rapidly register the segmented prostates in the planning image and each treatment image of a patient by online learning the patient-specific correlations between the deformations of the prostate boundary and its internal regions. This will enable fast warping of the treatment plan from the planning image space to the treatment image space for adaptive radiotherapy, and will also support dosimetric evaluation of the radiotherapy.
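The transition described above, from a population-based model in early treatment days to a patient-specific model once enough treatment images have accumulated, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the project's actual formulation: it treats each "model" as a simple mean appearance vector and ramps the patient-specific weight linearly with the number of treatment images collected, switching over fully after a hypothetical threshold `n_switch`.

```python
import numpy as np

def blend_models(pop_model, patient_images, n_switch=5):
    """Blend a population appearance model with a patient-specific one.

    Illustrative sketch only: the patient-specific model is taken as the
    mean of the treatment images collected so far, and its weight ramps
    linearly from 0 to 1 as the image count approaches `n_switch`.
    `pop_model`, `patient_images`, and `n_switch` are hypothetical names,
    not terms from the grant itself.
    """
    n = len(patient_images)
    if n == 0:
        return pop_model  # early treatment days: population model only
    w = min(n / n_switch, 1.0)  # weight on the patient-specific model
    patient_model = np.mean(patient_images, axis=0)
    return (1.0 - w) * pop_model + w * patient_model
```

With `n_switch=5`, day-two segmentation would still lean mostly on the population model (weight 0.4 on the patient-specific mean), while from day five onward the population model is ignored entirely, mirroring the replacement behavior the abstract describes.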
Our fourth specific aim is to evaluate the proposed prostate segmentation and registration algorithms using both physical phantom and real patient data, and to compare their performance with that of existing prostate segmentation algorithms. Successful development of these more accurate segmentation and fast registration methods would substantially improve the effectiveness of radiotherapy for cancer treatment. To benefit the research community, the methods developed in this project will also be incorporated into PLanUNC, a full-featured, fully documented, open-source treatment planning system developed at UNC, and will be made freely available to the public.
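Segmentation comparisons of the kind proposed in this aim are commonly scored with overlap metrics such as the Dice similarity coefficient between an algorithm's output mask and a manual contour. The grant does not specify its evaluation metric, so the following is only a standard illustrative example of how two binary segmentation masks might be compared:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Returns 2*|A∩B| / (|A|+|B|), ranging from 0 (no overlap) to 1
    (identical masks). This is a generic metric, not the project's
    stated evaluation protocol.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

For example, two 4x4 masks that each cover eight voxels but share only four of them score 2*4/(8+8) = 0.5.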

Public Health Relevance

This project aims to develop a novel method for online learning of patient-specific appearance and shape deformation information, as a way to significantly improve prostate segmentation and registration from daily CT images of a patient during image-guided radiation therapy. The developed methods, once validated, will be incorporated into PLanUNC, a full-featured, fully documented, open-source treatment planning system developed at UNC, and made freely available to the public.

Agency
National Institutes of Health (NIH)
Institute
National Cancer Institute (NCI)
Type
Research Project (R01)
Project #
5R01CA140413-05
Application #
8599312
Study Section
Biomedical Imaging Technology Study Section (BMIT)
Program Officer
Deye, James
Project Start
2010-07-06
Project End
2014-12-31
Budget Start
2014-03-05
Budget End
2014-12-31
Support Year
5
Fiscal Year
2014
Total Cost
$301,612
Indirect Cost
$97,820
Name
University of North Carolina Chapel Hill
Department
Radiation-Diagnostic/Oncology
Type
Schools of Medicine
DUNS #
608195277
City
Chapel Hill
State
NC
Country
United States
Zip Code
27599
Wu, Guorong; Peng, Xuewei; Ying, Shihui et al. (2016) eHUGS: Enhanced Hierarchical Unbiased Graph Shrinkage for Efficient Groupwise Registration. PLoS One 11:e0146870
Gao, Yaozong; Shao, Yeqin; Lian, Jun et al. (2016) Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests. IEEE Trans Med Imaging 35:1532-43
Shi, Yinghuan; Gao, Yaozong; Liao, Shu et al. (2016) A Learning-Based CT Prostate Segmentation Method via Joint Transductive Feature Selection and Regression. Neurocomputing 173:317-331
Huynh, Tri; Gao, Yaozong; Kang, Jiayin et al. (2016) Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model. IEEE Trans Med Imaging 35:174-83
Guo, Yanrong; Gao, Yaozong; Shen, Dinggang (2016) Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching. IEEE Trans Med Imaging 35:1077-89
Shao, Yeqin; Gao, Yaozong; Wang, Qian et al. (2015) Locally-constrained boundary regression for segmentation of prostate and rectum in the planning CT images. Med Image Anal 26:345-56
Wang, Qian; Lu, Le; Wu, Dijia et al. (2015) Automatic Segmentation of Spinal Canals in CT Images via Iterative Topology Refinement. IEEE Trans Med Imaging 34:1694-704
Shi, Yinghuan; Gao, Yaozong; Liao, Shu et al. (2015) Semi-automatic segmentation of prostate in CT images via coupled feature representation and spatial-constrained transductive lasso. IEEE Trans Pattern Anal Mach Intell 37:2286-303
Gao, Yaozong; Shen, Dinggang (2015) Collaborative regression-based anatomical landmark detection. Phys Med Biol 60:9377-401
Dai, Xiubin; Gao, Yaozong; Shen, Dinggang (2015) Online updating of context-aware landmark detectors for prostate localization in daily treatment CT images. Med Phys 42:2594-606

Showing the most recent 10 out of 42 publications