The data generated by the ENCODE Consortium constitute an unprecedented opportunity to make biomedical inferences about the function and structure of the human genome. With nearly 1000 genome-wide assays, the depth of information now publicly available about each base pair is staggering, and in the next round of consortium work this depth is likely to increase geometrically. In this proposal, we describe statistical challenges that, if met, will substantially enhance the capacity of the Analysis Working Group (AWG) to make functional biological inferences from ENCODE data. We will tackle these challenges, serving as a statistical "Research and Development" component with built-in experimental validation capabilities, and will provide the AWG with useful software implementations of the statistical tools we develop, as well as with iterative refinements of these tools grounded in experimental validations of our imputed networks. In particular, we will: 1) develop methods of dimension reduction that will aid the AWG in data visualization, summarization, and prediction studies (e.g. the prediction of transcription from chromatin data); 2) develop new quantitative network models of complex biological systems assayed by ENCODE; and 3) conduct targeted biological validation assays designed to interrogate important low-dimensional structures in our models and to feed back to improve both model structure and performance. Our approaches to dimension reduction will aid biologists in interpreting and formulating hypotheses from high-dimensional genomics data, our network models will facilitate the construction of interpretable predictive algorithms that lead directly to testable and quantifiable hypotheses, and our validation assays will ensure that inferences derived from our tools provide meaningful biological insights.
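To make the network-modeling aim concrete, the sketch below shows one of the simplest possible quantitative network models: a co-expression network inferred by thresholding pairwise correlations between genes across samples. All names, data, and the 0.5 threshold are hypothetical illustrations, not the methods proposed here; real ENCODE-scale inference would require far more sophisticated models (see, e.g., the Wang & Huang review listed among the publications).

```python
import numpy as np

# Hypothetical toy data: rows are samples, columns are genes.
rng = np.random.default_rng(1)
n_samples, n_genes = 100, 6
expr = rng.normal(size=(n_samples, n_genes))
# Make gene 1 closely track gene 0, so they should be linked.
expr[:, 1] = expr[:, 0] + 0.1 * rng.normal(size=n_samples)

# Correlation-based network: an edge joins any pair of genes whose
# absolute sample correlation exceeds an (arbitrary) cutoff of 0.5.
corr = np.corrcoef(expr, rowvar=False)
np.fill_diagonal(corr, 0.0)
edges = [(i, j)
         for i in range(n_genes)
         for j in range(i + 1, n_genes)
         if abs(corr[i, j]) > 0.5]
print(edges)
```

With only the single planted dependency, the recovered edge list should contain the pair (0, 1); edges among the remaining independent genes are unlikely at this sample size.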
As we did as part of the ENCODE and modENCODE data analysis centers, we will work closely with the AWG to ensure that our software implementations are immediately and maximally useful to the consortium, and that the overall course of our work on network inference and dimension reduction is focused around biological questions central to the interests of the consortium.
The ENCODE Consortium has generated, and will continue to generate, thousands of data sets that each provide information about the biochemical activity of every base in the human genome. The scope of these data is now so vast that no one researcher can hope to develop a coherent understanding of more than a very small fraction of the total information. Our aim is to provide computational tools that identify important structures and correlations in the data that can be understood and interpreted by researchers; that is, to enable scientists to understand and derive insight from genome-scale biology.
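As a minimal illustration of the dimension-reduction aim, the sketch below applies principal component analysis (via the SVD) to a simulated bins-by-assays signal matrix, compressing many correlated genome-wide assays into a low-dimensional summary per genomic bin. The data, dimensions, and two-factor structure are all assumptions for illustration only, not a description of the proposed methods.

```python
import numpy as np

# Hypothetical toy matrix: rows are genomic bins, columns are assays
# (e.g. ChIP-seq signal tracks). Two latent "chromatin state" factors
# are assumed to drive all assays, plus a little noise.
rng = np.random.default_rng(0)
n_bins, n_assays = 500, 20
latent = rng.normal(size=(n_bins, 2))
loadings = rng.normal(size=(2, n_assays))
signal = latent @ loadings + 0.1 * rng.normal(size=(n_bins, n_assays))

# PCA: center the columns and take the SVD. The top right singular
# vectors define the principal axes; projecting each bin onto them
# yields a 2-D summary suitable for visualization.
centered = signal - signal.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:2].T

# Fraction of total variance captured by the first two components.
var_explained = float((S[:2] ** 2).sum() / (S ** 2).sum())
```

Because the simulated signal is driven by only two factors, the first two components should capture nearly all of the variance, which is the sense in which such a projection "summarizes" many assays at once.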
Wang, Y X Rachel; Huang, Haiyan (2014) Review on statistical methods for gene network reconstruction using expression data. J Theor Biol 362:53-61
Gerstein, Mark B; Rozowsky, Joel; Yan, Koon-Kiu et al. (2014) Comparative analysis of the transcriptome across distant species. Nature 512:445-8
Alam, Tanvir; Medvedeva, Yulia A; Jia, Hui et al. (2014) Promoter analysis reveals globally differential regulation of human long non-coding RNA and protein-coding genes. PLoS One 9:e109443
Li, Jingyi Jessica; Huang, Haiyan; Bickel, Peter J et al. (2014) Comparison of D. melanogaster and C. elegans developmental stages, tissues, and cells by modENCODE RNA-seq data. Genome Res 24:1086-101
Boley, Nathan; Wan, Kenneth H; Bickel, Peter J et al. (2014) Navigating and mining modENCODE data. Methods 68:38-47
Brown, James B; Boley, Nathan; Eisman, Robert et al. (2014) Diversity and dynamics of the Drosophila transcriptome. Nature 512:393-9
Boley, Nathan; Stoiber, Marcus H; Booth, Benjamin W et al. (2014) Genome-guided transcript assembly by integrative analysis of RNA sequence data. Nat Biotechnol 32:341-6