Second-generation (sec-gen) sequencing technology is poised to radically change how genomic data is obtained and used. Capable of sequencing millions of short strands of DNA in parallel, this technology can be used to assemble complex genomes for a small fraction of the cost and time required by previous technologies. In fact, a recently formed international consortium, the 1000 Genomes Project, plans to sequence the genomes of approximately 1,200 people. Comparative analysis at the sequence level across a large number of samples from multiple populations may therefore be achievable within the next five years.

These datasets also present unprecedented challenges in statistical analysis and data management. For example, a central goal of the 1000 Genomes Project is to quantify across-sample variation at the single nucleotide level. At this resolution, even small sequencing error rates become significant, especially for rare variants. Furthermore, sec-gen sequencing is a relatively new technology for which potential biases and sources of obscuring variation are not yet fully understood. Modeling and quantifying the uncertainty inherent in the generation of sequencing reads is therefore of utmost importance. Properly relating this uncertainty to the true underlying variation in the genome, especially variation between and among populations, will be essential for projects that use sec-gen sequencing data to meet their scientific goals.

Although genome sequencing is the application that has received the most attention, sec-gen technology is also being used to produce quantitative measurements for applications previously associated with microarrays. Of these, chromatin immunoprecipitation followed by sequencing (ChIP-Seq) has been the most successful. Existing tools analyze one sample at a time; methodology for drawing inference from multiple samples has not yet been developed, and demand for such methods will increase rapidly as the technology becomes more economical and multi-sample studies become standard. Other applications for which statistical methodology is needed include RNA and microRNA transcription analysis. In all of these sequencing applications, a number of critical steps are required to convert raw intensity measures into the sequence reads used in downstream analysis, and ad hoc approaches that assign weights to each base call are unsuitable. Our goal is to create a sound, unified statistical and computational methodology for representing and managing uncertainty throughout the sec-gen sequencing data analysis pipeline, built on a robust, modular, and extensible software platform.
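To illustrate why small error rates become problematic at single-nucleotide resolution, consider the following back-of-the-envelope calculation (a minimal Python sketch; the 1% per-base error rate, 30x coverage, three-mismatch calling rule, and genome size are illustrative assumptions, not figures from this proposal):

    from math import comb

    def prob_at_least_k_errors(n, k, e):
        # P(at least k erroneous non-reference calls at a site | coverage n, per-base error rate e)
        return sum(comb(n, j) * (e ** j) * ((1 - e) ** (n - j)) for j in range(k, n + 1))

    coverage = 30          # assumed read depth per site
    error_rate = 0.01      # assumed per-base error rate (roughly Phred 20)
    min_mismatches = 3     # naive rule: call a variant when >= 3 reads disagree with the reference
    genome_sites = 3e9     # approximate number of bases in a human genome

    p_false_site = prob_at_least_k_errors(coverage, min_mismatches, error_rate)
    expected_false_calls = p_false_site * genome_sites
    print(f"per-site false-call probability: {p_false_site:.2e}")
    print(f"expected false variant calls genome-wide: {expected_false_calls:.1e}")

Under these illustrative assumptions the naive rule produces on the order of ten million false calls genome-wide, comparable to or exceeding the few million true variants expected in an individual genome; this is the sense in which calibrated, model-based measures of base-call uncertainty are needed rather than ad hoc weights.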
Second-generation sequencing technology is poised to radically change how genomic data is obtained and used. These datasets also present unprecedented challenges in statistical analysis, and modeling and quantifying the uncertainty inherent in the generation of sequencing reads is of utmost importance. We will develop data analysis tools for widely used applications using statistical methods that account for this uncertainty.