The first two years of life are the most dynamic and perhaps the most critical phase of postnatal brain development. The ability to accurately characterize structural changes is critical for exploring early brain development and for the early detection of neurodevelopmental disorders in imaging-based studies, which rely heavily on image segmentation and registration techniques. However, infant image segmentation and registration, when deployed independently, each encounter more challenges than in adult brains due to dramatic appearance changes and rapid brain development. Fortunately, image segmentation and registration can assist each other in overcoming these difficulties by using growth trajectories (temporal correspondences) learned from a large set of complete longitudinal, multi-modality data (T1, T2, and DTI images at 2 weeks, 3 months, 6 months, 9 months, 1 year, and 2 years of age) collected at UNC-CH. Specifically, we will develop a joint segmentation and registration framework that determines the tissue type at each image point and simultaneously finds the deformation pathway between any two infant brain images separated by a significant age gap (Aim 1). Preliminary results demonstrate significant benefits of this approach. After comprehensively evaluating its performance on a large number of infant datasets, we will package our joint segmentation and registration approach into a software package and release it freely to the community (Aim 2), as we have done with our other software packages, which have been downloaded more than 10,000 times. Given the importance of image segmentation and registration in computational anatomy, this cutting-edge technique will also be very useful for many ongoing early brain development studies.
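
To make the idea of mutual assistance between segmentation and registration concrete, the sketch below shows a toy alternating loop on synthetic 2-D images. It is an illustrative assumption rather than the proposal's actual algorithm: registration is reduced to exhaustive integer translation, segmentation to nearest-mean intensity labeling, and the names toy_segment, toy_register, and joint_segment_register are hypothetical.

```python
# Minimal sketch (assumed, not the proposed method): alternate a toy registration
# and a toy segmentation so that each step guides the other.
import numpy as np
from scipy import ndimage


def toy_segment(image, prior_labels, n_classes=3):
    """Nearest-mean intensity labeling, with class means seeded by prior labels."""
    means = np.array([
        image[prior_labels == k].mean() if np.any(prior_labels == k) else image.mean()
        for k in range(n_classes)
    ])
    # Distance of every voxel intensity to each class mean; pick the closest class.
    dist = np.abs(image[..., None] - means)
    return dist.argmin(axis=-1)


def toy_register(moving, fixed, search=5):
    """Exhaustive integer-translation registration by sum of squared differences."""
    best_ssd, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = ndimage.shift(moving, (dy, dx), order=1, mode="nearest")
            ssd = float(np.sum((shifted - fixed) ** 2))
            if ssd < best_ssd:
                best_ssd, best_shift = ssd, (dy, dx)
    return best_shift


def joint_segment_register(fixed, moving, moving_labels, n_iters=3):
    """Alternate registration and segmentation so each step informs the other."""
    # Initial alignment from raw intensities.
    shift = toy_register(moving, fixed)
    for _ in range(n_iters):
        # Propagate the moving image's tissue labels into the fixed image space.
        warped_labels = ndimage.shift(
            moving_labels.astype(float), shift, order=0, mode="nearest"
        ).astype(int)
        # Segmentation step: propagated labels guide tissue assignment in the fixed image.
        fixed_labels = toy_segment(fixed, warped_labels)
        # Registration step: re-align on label maps, which are less sensitive to the
        # appearance change between time points than raw intensities.
        shift = toy_register(moving_labels.astype(float), fixed_labels.astype(float))
    return fixed_labels, shift


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fixed = ndimage.gaussian_filter(rng.random((64, 64)), sigma=3)
    moving = ndimage.shift(fixed, (2, -3), mode="nearest") + 0.01 * rng.random((64, 64))
    # Crude 3-class initialization of the moving image's tissue labels.
    init_labels = (
        (moving > np.median(moving)).astype(int)
        + (moving > np.percentile(moving, 75)).astype(int)
    )
    labels, shift = joint_segment_register(fixed, moving, init_labels)
    print("recovered shift:", shift)  # expected roughly (-2, 3)
```

The point the sketch illustrates is the coupling direction: warped labels act as a spatial prior for segmenting the target image, and the resulting label maps, being less affected by age-related appearance changes, in turn drive the next registration step.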

Public Health Relevance

This proposal aims to develop an efficient computational anatomy approach for the difficult problems of tissue segmentation and registration of infant brain images in the first years of life. Specifically, to overcome the issues of dynamic appearance changes and spatially-varying development, we propose a joint image segmentation and registration framework that simultaneously determines the tissue type at each image point and finds the deformation pathway between any two infant images at different developmental stages. Given the importance of image segmentation and registration in computational anatomy, this cutting-edge technique will also be very useful for many ongoing early brain development studies.

Agency: National Institutes of Health (NIH)
Institute: Eunice Kennedy Shriver National Institute of Child Health & Human Development (NICHD)
Type: Exploratory/Developmental Grants (R21)
Project #: 5R21HD081467-02
Application #: 9038395
Study Section: Special Emphasis Panel (ZRG1)
Program Officer: Freund, Lisa S
Project Start: 2015-04-01
Project End: 2017-03-31
Budget Start: 2016-04-01
Budget End: 2017-03-31
Support Year: 2
Fiscal Year: 2016
Total Cost:
Indirect Cost:
Name: University of North Carolina Chapel Hill
Department: Radiation-Diagnostic/Oncology
Type: Schools of Medicine
DUNS #: 608195277
City: Chapel Hill
State: NC
Country: United States
Zip Code: 27599
Hu, Shunbo; Wei, Lifang; Gao, Yaozong et al. (2017) Learning-based deformable image registration for infant MR images in the first year of life. Med Phys 44:158-170
Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan et al. (2017) Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning. Med Image Anal 39:218-230
Zu, Chen; Wang, Zhengxia; Zhang, Daoqiang et al. (2017) Robust multi-atlas label propagation by deep sparse representation. Pattern Recognit 63:511-517
Guo, Yanrong; Dong, Pei; Hao, Shijie et al. (2016) Automatic Segmentation of Hippocampus for Longitudinal Infant Brain MR Image Sequence by Spatial-Temporal Hypergraph Learning. Patch Based Tech Med Imaging (2016) 9993:1-8
Wu, Guorong; Peng, Xuewei; Ying, Shihui et al. (2016) eHUGS: Enhanced Hierarchical Unbiased Graph Shrinkage for Efficient Groupwise Registration. PLoS One 11:e0146870
Wu, Guorong; Kim, Minjeong; Wang, Qian et al. (2016) Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning. IEEE Trans Biomed Eng 63:1505-16
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong et al. (2015) A transversal approach for patch-based label fusion via matrix completion. Med Image Anal 24:135-148
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard et al. (2015) Hierarchical multi-atlas label fusion with multi-scale feature representation and label-specific patch partition. Neuroimage 106:34-46
Kim, Minjeong; Wu, Guorong; Guo, Yanrong et al. (2015) Joint Labeling Of Multiple Regions of Interest (Rois) By Enhanced Auto Context Models. Proc IEEE Int Symp Biomed Imaging 2015:1560-1563
Kim, Minjeong; Wu, Guorong; Wang, Qian et al. (2015) Improved image registration by sparse patch-based deformation estimation. Neuroimage 105:257-68

Showing the most recent 10 out of 20 publications