Recent changes in the health care system place a premium on measuring and monitoring health outcomes to quantify the risks and benefits of medical services. The balance among efforts to lower costs, increase efficiency, and improve outcomes is a delicate one to maintain, and risk prediction models play an increasingly important role in maintaining it. With at least 36 states already implementing initiatives that require health care institutions to report outcomes data, there is a growing trend toward adopting risk prediction models to guide continuous quality improvement. Although risk adjustment is recognized as a necessary ingredient in any formula for outcome assessment, there are no well-accepted methods for determining the validity of risk prediction models. To be useful, a risk prediction model must be both reliable and able to discriminate among patients likely to benefit. A number of validation methods for assessing these qualities have been proposed but inadequately studied. This investigation will use several premier, high-quality datasets to test and develop methods that improve understanding of the foundation upon which much of outcomes research depends. Specifically, we propose to:

1. Organize, classify, and evaluate existing methods to validate predictive models.
2. Develop software tools for those methods that appear most promising.
3. Develop new validation methods with accompanying software tools.
4. Use existing models and both internal and external datasets to apply, test, and compare the validation methods.
5. Create a reference document for use in validating new models.
6. Create a user-friendly software package for applying these validation methods.

The methodology in this project will be implemented primarily in the area of cardiology. A number of published diagnostic and long-term survival models have been developed and tested within the Duke Databank for Cardiovascular Disease.
Other published models for predicting risk-adjusted procedural mortality have been developed and applied in the IHD PORT. Using data from Duke, Minnesota, New York, Northern New England, HCFA, and Oklahoma, we will develop validation strategies while concurrently applying them to these existing models and datasets. The results should be applicable to other areas of health care.
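The two model qualities emphasized above, reliability (calibration) and discrimination, can be sketched with simple summary statistics. Below is a minimal illustration, assuming a binary outcome (e.g., procedural mortality) and a model that outputs predicted event probabilities; the function names are illustrative and are not drawn from the proposal or any specific software package.

```python
def c_statistic(outcomes, predictions):
    """Concordance (c) statistic: the probability that a randomly chosen
    patient with the outcome receives a higher predicted risk than a
    randomly chosen patient without it. Equivalent to the area under
    the ROC curve, and a standard measure of discrimination."""
    pairs = concordant = ties = 0
    for yi, pi in zip(outcomes, predictions):
        for yj, pj in zip(outcomes, predictions):
            if yi == 1 and yj == 0:          # one event/non-event pair
                pairs += 1
                if pi > pj:
                    concordant += 1
                elif pi == pj:
                    ties += 1
    return (concordant + 0.5 * ties) / pairs

def calibration_in_the_large(outcomes, predictions):
    """Ratio of observed to expected events. A value near 1.0 means the
    model's average predicted risk matches the observed event rate,
    one crude check of reliability (calibration)."""
    return sum(outcomes) / sum(predictions)

# Example: a model that ranks all events above all non-events
# and predicts risks that sum to the observed event count.
y = [0, 0, 1, 1]
p = [0.1, 0.2, 0.8, 0.9]
print(c_statistic(y, p))              # perfect discrimination: 1.0
print(calibration_in_the_large(y, p)) # well calibrated on average: 1.0
```

A model can score well on one measure and poorly on the other (for instance, perfectly ranked predictions that systematically understate absolute risk), which is why validation must assess both qualities, as the proposal's aims reflect.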