This proposal is designed to develop and evaluate a novel web-based model for computer-assisted patient simulation that can be used for performance-based competency assessment and education. This new model will emulate patient care through an unprompted, sequential decision-making process of ordering tests, advancing to the next encounter, and receiving and evaluating results, proceeding through several encounters until the trainee decides to make a definitive diagnostic decision. As we develop this model, we will also evaluate whether a novel scoring algorithm based on expert responses, augmented with evidence-based expert consensus building conducted via web communication, provides better reliability and validity than expert-response-generated keys without the consensus component, or than global ratings. Reliability and validity of the simulation model and scoring key for competency assessment will be tested in the domain of clinical laboratory medicine. The simulation model and scoring key will be developed from a web-based simulation currently used in the pathology course at the University of Iowa (PathCAPS), which will be significantly upgraded as a Perl CGI-scripted MySQL database application. The newly developed simulation will assume the acronym LabCAPS. The evaluation research will be carried out in medical school courses at the University of Iowa and in laboratory medicine (clinical pathology) residency training programs at the University of Iowa Hospitals and Clinics and elsewhere. The educational goal of the simulation will be to foster evidence-based use of laboratory resources and a reduction in unnecessary expenses. A long-term goal will be to develop a comprehensive web-enabled dataset of LabCAPS simulations that can be used to reliably measure patient care competencies of medical trainees throughout the United States.
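To make the scoring concept concrete, the sketch below shows one minimal way a trainee's test-ordering sequence could be scored against an expert-consensus key. All test names, weights, and the scoring rule are illustrative assumptions for exposition only, not the actual LabCAPS algorithm, which the proposal leaves to be developed.

```python
# Hypothetical sketch: score a trainee's ordering sequence against a
# consensus key derived from expert responses. Weights are assumptions:
# positive = indicated by expert consensus, negative = unnecessary cost.
CONSENSUS_KEY = {
    "CBC": 1.0,
    "basic_metabolic_panel": 1.0,
    "TSH": 0.5,
    "CT_abdomen": -1.0,  # experts judged this to add expense without value here
}

def score_orders(orders):
    """Sum the consensus weights of each ordered test; unlisted tests score 0."""
    return sum(CONSENSUS_KEY.get(test, 0.0) for test in orders)

trainee_orders = ["CBC", "TSH", "CT_abdomen"]
print(score_orders(trainee_orders))  # 1.0 + 0.5 - 1.0 = 0.5
```

In a consensus-based key of this kind, the weights themselves would come from the evidence-based expert consensus process the proposal describes, rather than from any single expert's responses.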
With a successful outcome, the technology and scoring techniques developed under this grant could significantly improve the validity of testing in a number of areas of medical education. Because of validity concerns with multiple-choice items, there has been widespread interest in performance assessment methods. But performance assessments, as currently conceptualized, are extremely expensive to develop and score. This grant investigates methodologies designed to significantly reduce the cost of both development and scoring.