The proposed publication will collect and synthesize methods for quantifying systematic errors that affect non-randomized epidemiologic research. Conventional confidence intervals perform poorly as measures of total uncertainty in such research because they account only for random error, yet most of the error in these studies arises from systematic sources. Epidemiologic studies yield effect estimates, such as the risk ratio, and their results inform decisions across all aspects of health interventions. The error accompanying an effect estimate is defined as its difference from the true effect, and it can be divided into random error and systematic error. Random error approaches zero as the study size increases; systematic error does not. Precision quantifies the amount of random error in an effect estimate and is usually represented by a confidence interval. Validity measures the amount of systematic error, but it is seldom quantified. A quantitative assessment of the systematic error associated with an effect estimate can be made by sensitivity analysis.

The primary audience of the proposed publication comprises all authors, analysts, and consumers of non-randomized epidemiologic research. The publication will guide readers as they plan sensitivity analyses and will illustrate methods of sensitivity analysis that can be applied to data sets as readers conduct analyses and summarize results for publication. The publication will have a text component and a software component. The text will be published as a conventional book and electronically on the National Center for Biotechnology Information online bookshelf. The software will be freely available on the BUSPH website. The software files will be Microsoft Excel workbooks and SAS macro code that illustrate the methods and allow readers to implement sensitivity analysis methods with their own data. This publication will enhance the ability of investigators to assess systematic error in effect estimates.
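To make the idea of a quantitative sensitivity analysis concrete, the sketch below shows one of the simplest such calculations: back-correcting an observed 2x2 table for nondifferential exposure misclassification under assumed values of classification sensitivity and specificity, and comparing the conventional and bias-adjusted odds ratios. This is an illustrative sketch only; the counts, the assumed classification parameters, and the function names are hypothetical, and Python is used here purely for illustration rather than the Excel and SAS tools the publication will provide.

# Illustrative sketch only: a simple bias analysis that corrects an observed
# 2x2 table for nondifferential exposure misclassification, given assumed
# sensitivity (se) and specificity (sp) of exposure classification.
# All counts and parameter values are hypothetical.

def correct_misclassification(a_obs, b_obs, c_obs, d_obs, se, sp):
    """Return bias-adjusted counts (exposed cases, unexposed cases,
    exposed noncases, unexposed noncases)."""
    cases = a_obs + b_obs        # total cases
    noncases = c_obs + d_obs     # total noncases
    # Observed exposed count = se*true + (1 - sp)*(total - true);
    # solve for the true exposed count within cases and within noncases.
    a_true = (a_obs - (1 - sp) * cases) / (se + sp - 1)
    c_true = (c_obs - (1 - sp) * noncases) / (se + sp - 1)
    return a_true, cases - a_true, c_true, noncases - c_true

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

# Hypothetical observed table: 45 exposed cases, 94 unexposed cases,
# 257 exposed noncases, 945 unexposed noncases.
a, b, c, d = 45, 94, 257, 945
print("conventional OR:", round(odds_ratio(a, b, c, d), 2))
at, bt, ct, dt = correct_misclassification(a, b, c, d, se=0.90, sp=0.95)
print("bias-adjusted OR:", round(odds_ratio(at, bt, ct, dt), 2))

Repeating such a correction over a range of plausible sensitivity and specificity values shows how much the conventional estimate could change under different assumptions about the systematic error, which is the essence of the sensitivity analysis methods the publication will present.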