Gene expression provides a snapshot of the cellular changes that promote tumor malignancy. Quantitative gene expression analysis, especially as implemented by DNA microarrays, has identified many important new cancer-related genes and led to the development of new genomics-based clinical tests. Many statistical methods have been used to study human tumors quantitatively and to classify them into groups that predict clinical behavior. Despite this progress, rapid advances in technology are generating massive, complex data in cancer research, and analyzing such data is increasingly challenging. These challenges call for novel statistical learning methods, especially for high-dimensional, noisy data. The goal of this project is to develop a host of new statistical learning techniques for solving complicated learning problems. In particular, this project develops (1) novel techniques to assess the statistical significance of clustering for high-dimensional data; (2) several novel predictive models, including classification and regression models, that are expected to yield highly competitive accuracy and interpretability; (3) new methods for high-dimensional biomarker/variable selection; and (4) new approaches to estimating high-dimensional covariance/precision matrices for biological network construction. These new developments are expected to allow scientists to analyze complex cancer genomic data with high prediction accuracy and increased interpretability. The research team will apply the proposed techniques to the analysis of cancer research data. The success of this project will be important in bridging statistical machine learning and cancer research.
This project aims to develop a host of new statistical learning techniques for solving complicated learning problems, especially problems with high-dimensional, noisy data such as gene expression data. These new techniques are expected to allow scientists to analyze complex cancer genomic data with high prediction accuracy and increased interpretability.
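The abstract names high-dimensional biomarker/variable selection as one of the project's aims but does not specify the methods to be developed. Purely as a generic illustration of how penalized regression performs variable selection in this setting, the sketch below implements standard lasso coordinate descent in plain Python; the function names and data are hypothetical and are not taken from the project.

```python
import random

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) used in lasso coordinate descent."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Fit the lasso: minimize (1/2n)||y - X b||^2 + lam * ||b||_1
    by cyclic coordinate descent. X is a list of n rows of length p."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual leaving out feature j.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            # Soft-thresholding drives small coefficients exactly to zero,
            # which is what yields variable selection.
            beta[j] = soft_threshold(rho, lam) / zj
    return beta

# Hypothetical demonstration: only the first two of ten features matter.
random.seed(1)
n, p = 50, 10
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [3.0 * X[i][0] - 2.0 * X[i][1] + random.gauss(0, 0.3) for i in range(n)]
beta = lasso_coordinate_descent(X, y, lam=0.1)
```

With this setup, the fitted coefficients for the two informative features stay near their true values while the coefficients of the noise features are shrunk toward (or exactly to) zero, illustrating selection; production analyses would instead use an optimized solver such as scikit-learn's `Lasso` or `glmnet` in R.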
Zhang, Chong; Liu, Yufeng (2016) Comments on: Probability Enhanced Effective Dimension Reduction for Classifying Sparse Functional Data. Test (Madr) 25:44-46
Chen, Guanhua; Liu, Yufeng; Shen, Dinggang et al. (2016) Composite large margin classifiers with latent subclasses for heterogeneous biomedical data. Stat Anal Data Min 9:75-88
Zhang, Chong; Liu, Yufeng; Wu, Yichao (2016) On quantile regression in reproducing kernel Hilbert spaces with data sparsity constraint. J Mach Learn Res 17:1-45
Zhang, Xiang; Wu, Yichao; Wang, Lan et al. (2016) Variable selection for support vector machines in moderately high dimensions. J R Stat Soc Series B Stat Methodol 78:53-76
Shin, Sunyoung; Fine, Jason; Liu, Yufeng (2016) Adaptive estimation with partially overlapping models. Stat Sin 26:235-253
Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao (2016) Local independence feature screening for nonparametric and semiparametric models by marginal empirical likelihood. Ann Stat 44:515-539
Zhang, Chong; Liu, Yufeng; Wang, Junhui et al. (2016) Reinforced angle-based multicategory support vector machines. J Comput Graph Stat 25:806-825
Zhang, Xiang; Wu, Yichao; Wang, Lan et al. (2016) A consistent information criterion for support vector machines in diverging model spaces. J Mach Learn Res 17:1-26
Hu, Hao; Wu, Yichao; Yao, Weixin (2016) Maximum likelihood estimation of the mixture of log-concave densities. Comput Stat Data Anal 101:137-147
Wu, Yichao; Stefanski, Leonard A (2015) Automatic structure recovery for additive models. Biometrika 102:381-395
Showing the most recent 10 out of 60 publications