Second-generation sequencing (sec-gen) technology is poised to radically change how genomic data is obtained and used. Capable of sequencing millions of short strands of DNA in parallel, this technology can be used to assemble complex genomes for a small fraction of the price and time of previous technologies. In fact, a recently formed international consortium, the 1000 Genomes Project, plans to sequence the genomes of approximately 1,200 people, and comparative analysis at the sequence level of a large number of samples across multiple populations may be achievable within the next five years. These datasets also present unprecedented challenges in statistical analysis and data management. For example, a central goal of the 1000 Genomes Project is to quantify across-sample variation at the single-nucleotide level. At this resolution, small sequencing error rates become significant, especially for rare variants. Furthermore, sec-gen sequencing is a relatively new technology for which potential biases and sources of obscuring variation are not yet fully understood. Modeling and quantifying the uncertainty inherent in the generation of sequencing reads is therefore of utmost importance. Properly relating this uncertainty to the true underlying variation in the genome, especially variation between and among populations, will be essential for projects that use sec-gen sequencing data to meet their scientific goals. Although genome sequencing is the application that has received the most attention, sec-gen technology is also being used to produce quantitative measurements for applications previously associated with microarrays. Of these, chromatin immunoprecipitation followed by sequencing (ChIP-Seq) has been the most successful. Existing tools analyze one sample at a time; methodology for drawing inference from multiple samples has not yet been developed.
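The claim that small error rates become significant at single-nucleotide resolution can be illustrated with a simple binomial calculation (a hypothetical sketch, not part of the proposal; the function name and thresholds are illustrative). Even a modest per-base error rate produces, at a homozygous-reference site, a non-negligible chance that errors alone mimic a rare variant:

```python
from math import comb

def p_false_variant(depth, error_rate, min_alt_reads):
    """Probability that independent base-calling errors alone produce
    at least `min_alt_reads` alternate-allele calls at a homozygous
    site with the given read depth (upper tail of a binomial)."""
    # Conservative upper bound: we do not discount for the fact that
    # only ~1/3 of errors hit any one specific alternate base.
    return sum(
        comb(depth, k) * error_rate**k * (1 - error_rate) ** (depth - k)
        for k in range(min_alt_reads, depth + 1)
    )

# At 30x coverage with a 1% per-base error rate, requiring 3
# alternate-allele reads to call a variant:
p = p_false_variant(30, 0.01, 3)
```

With these assumed numbers, roughly 0.3% of homozygous sites would show three or more spurious alternate reads; over billions of genomic positions that is millions of false candidates, which is why error modeling matters most for rare variants.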
The demand for such methods will increase rapidly as the technology becomes more economical and multiple samples become standard. Other applications for which statistical methodology is needed include RNA and microRNA transcription analysis. In all of these sequencing applications, a number of critical steps are required to convert raw intensity measures into the sequence reads used in downstream analysis. Ad-hoc approaches that assign weights to each base call are unsuitable. Our goal is to create a sound, unified statistical and computational methodology for representing and managing uncertainty throughout the sec-gen sequencing data analysis pipeline, built on a robust, modular, and extensible software platform.
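The per-base-call weights mentioned above are, in practice, Phred quality scores, the standard encoding used across sequencing platforms (a general convention, not a method specific to this proposal). A minimal sketch of the mapping between a quality score Q and the implied error probability:

```python
from math import log10

def phred_to_error_prob(q):
    """Error probability implied by a Phred quality score:
    P(error) = 10 ** (-Q / 10)."""
    return 10.0 ** (-q / 10.0)

def phred_from_error_prob(p):
    """Inverse mapping, useful when combining evidence on the
    Phred scale (p must be in (0, 1])."""
    return -10.0 * log10(p)

# Q20 corresponds to a 1-in-100 chance the base call is wrong;
# Q30 to 1-in-1000.
```

Because these scores are themselves estimates produced by platform software, propagating them as calibrated probabilities, rather than ad-hoc weights, is part of what principled uncertainty modeling entails.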

Public Health Relevance

Second-generation sequencing technology is poised to radically change how genomic data is obtained and used. These datasets also present unprecedented challenges in statistical analysis, and modeling and quantifying uncertainty inherent in the generation of sequencing reads is of utmost importance. We will develop data analysis tools for widely used applications using statistical methods that account for this uncertainty.

Agency
National Institutes of Health (NIH)
Institute
National Human Genome Research Institute (NHGRI)
Type
Research Project (R01)
Project #
7R01HG005220-04
Application #
8806870
Study Section
Genomics, Computational Biology and Technology Study Section (GCAT)
Program Officer
Brooks, Lisa
Project Start
2010-08-11
Project End
2015-05-31
Budget Start
2013-06-01
Budget End
2015-05-31
Support Year
4
Fiscal Year
2012
Total Cost
$83,810
Indirect Cost
$17,683
Name
Dana-Farber Cancer Institute
Department
Type
DUNS #
076580745
City
Boston
State
MA
Country
United States
Zip Code
02215
Patro, Rob; Duggal, Geet; Love, Michael I et al. (2017) Salmon provides fast and bias-aware quantification of transcript expression. Nat Methods 14:417-419
Mallick, Himel; Ma, Siyuan; Franzosa, Eric A et al. (2017) Experimental design and quantitative analysis of microbial community multiomics. Genome Biol 18:228
Sinha, Rashmi; Abu-Ali, Galeb; Vogtmann, Emily et al. (2017) Assessment of variation in microbial community amplicon sequencing by the Microbiome Quality Control (MBQC) project consortium. Nat Biotechnol 35:1077-1086
Teng, Mingxiang; Irizarry, Rafael A (2017) Accounting for GC-content bias reduces systematic errors and batch effects in ChIP-seq data. Genome Res 27:1930-1938
Wagner, Justin; Paulson, Joseph N; Wang, Xiao et al. (2016) Privacy-preserving microbiome analysis using secure computation. Bioinformatics 32:1873-9
Love, Michael I; Hogenesch, John B; Irizarry, Rafael A (2016) Modeling of RNA-seq fragment sequence bias reduces systematic errors in transcript abundance estimation. Nat Biotechnol 34:1287-1291
Dorri, Faezeh; Mendelowitz, Lee; Corrada Bravo, Héctor (2016) methylFlow: cell-specific methylation pattern reconstruction from high-throughput bisulfite-converted DNA sequencing. Bioinformatics 32:1618-24
Sharmin, Mahfuza; Bravo, Héctor Corrada; Hannenhalli, Sridhar (2016) Distinct genomic and epigenomic features demarcate hypomethylated blocks in colon cancer. BMC Cancer 16:88
Sharmin, Mahfuza; Bravo, Héctor Corrada; Hannenhalli, Sridhar (2016) Heterogeneity of transcription factor binding specificity models within and across cell lines. Genome Res 26:1110-23
Dinalankara, Wikum; Bravo, Héctor Corrada (2015) Gene Expression Signatures Based on Variability can Robustly Predict Tumor Progression and Prognosis. Cancer Inform 14:71-81

Showing the most recent 10 out of 45 publications