New diagnostic tests are developed quickly, and existing diagnostic tests are often rapidly improved after being introduced into practice. Unfortunately, inaccurate and biased evaluations of a test's statistical properties, often the result of a poorly designed or poorly analyzed study, lead to the premature dissemination of such tests and to physicians using unreliable tests to make critical treatment decisions. Perhaps the most common cause of the misevaluation of diagnostic tests is verification bias, which occurs when the verification of a patient's disease status depends on the result of the proposed test or on certain patient characteristics associated with disease status. Statistical methods that correct for verification bias are underdeveloped and seldom used. This application proposes a novel statistical strategy for addressing verification bias that is generalizable and, with an appropriate software package, accessible to non-statisticians. Even when only a select subset of low-risk, negative-screening patients can undergo invasive or costly disease verification, the proposed method will still yield a valid (and cost-efficient) strategy for evaluating the statistical properties of the diagnostic test under consideration. Specifically, this application addresses the following four problems.
(Aim 1:) To develop a novel doubly robust estimator for sensitivity, specificity, and positive and negative predictive values that can be used in the presence of verification bias. The estimators are doubly robust in the sense that the estimate is correct (i.e., consistent) in moderately large samples if either the model for true disease status or the model for verification status (but not necessarily both) is correctly specified.
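The application does not spell out the estimator itself, but a minimal sketch of the standard doubly robust (augmented inverse-probability-weighted) construction that this kind of estimator builds on may help fix ideas; the notation below is illustrative rather than the applicants' own. Let $T$ denote the binary test result, $D$ true disease status, $V$ the indicator that disease status was verified, and $X$ patient covariates, and assume verification is missing at random given $(T, X)$. With a working verification model $\hat{\pi}(T,X) \approx P(V=1 \mid T,X)$ and a working disease model $\hat{\rho}(T,X) \approx P(D=1 \mid T,X)$, a doubly robust estimate of the disease prevalence $P(D=1)$ is
\[
\hat{P}_{\mathrm{DR}}(D=1) \;=\; \frac{1}{n}\sum_{i=1}^{n}\left[\frac{V_i D_i}{\hat{\pi}(T_i,X_i)} \;-\; \frac{V_i - \hat{\pi}(T_i,X_i)}{\hat{\pi}(T_i,X_i)}\,\hat{\rho}(T_i,X_i)\right],
\]
and sensitivity $P(T=1 \mid D=1)$ follows as the ratio of the analogous estimate of $P(T=1, D=1)$ (multiply each summand by $T_i$) to the estimate above; specificity and the predictive values are handled in the same way. The estimate is consistent whenever either $\hat{\pi}$ or $\hat{\rho}$ is correctly specified, which is precisely the double-robustness property described in this aim.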
(Aim 2:) To extend the methods developed in Aim 1 to tests and biomarkers that yield continuous or ordinal results, where the area under a receiver operating characteristic (ROC) curve is used to measure diagnostic accuracy.
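For continuous markers, one way such an extension could look in practice is sketched below, under the same missing-at-random assumption and with illustrative logistic working models (the function and variable names are ours, not part of the application): form a doubly robust pseudo-outcome for each subject's disease status and then trace out a bias-corrected ROC curve.

import numpy as np
from sklearn.linear_model import LogisticRegression

def dr_auc(score, X, verified, disease):
    """Verification-bias-corrected AUC for a continuous test or biomarker.

    score    : marker value for all n subjects
    X        : covariates thought to drive verification and disease
    verified : 1 if true disease status was ascertained, 0 otherwise
    disease  : true disease status (use 0 as a placeholder where unverified)

    Assumes verification is missing at random given (score, X); the two
    logistic working models below are illustrative choices only.
    """
    score = np.asarray(score, float)
    verified = np.asarray(verified, float)
    disease = np.asarray(disease, float)
    design = np.column_stack([score, np.asarray(X, float)])

    # Working model for the probability of verification, P(V = 1 | score, X)
    pi = LogisticRegression().fit(design, verified).predict_proba(design)[:, 1]

    # Working model for disease, P(D = 1 | score, X), fit among verified subjects
    m = verified == 1
    rho = LogisticRegression().fit(design[m], disease[m]).predict_proba(design)[:, 1]

    # Doubly robust pseudo-outcome: consistent for E[D | score, X]
    # if either working model is correctly specified
    d = verified * disease / pi - (verified - pi) / pi * rho

    # Bias-corrected TPR and FPR across all thresholds, then the area
    # under the resulting ROC curve by the trapezoidal rule
    cuts = np.concatenate(([np.inf], np.sort(np.unique(score))[::-1], [-np.inf]))
    tpr = np.array([((score > c) * d).sum() for c in cuts]) / d.sum()
    fpr = np.array([((score > c) * (1 - d)).sum() for c in cuts]) / (1 - d).sum()
    return np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)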
(Aim 3:) To 'reverse' our approach and develop a model for predicting disease status from patient characteristics and the diagnostic test result, in the presence of verification bias.
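Again purely as an illustration of the idea (the application does not dictate the modeling strategy, and the proposal's own approach is doubly robust rather than the simpler inverse-probability-weighted version sketched here), a 'reverse' prediction model of this kind could be fit roughly as follows; the function and variable names are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_disease_model(test_result, X, verified, disease):
    """Illustrative model for predicting true disease status from the test
    result and patient characteristics when only a biased subset of
    patients has verified disease status.

    Assumes verification is missing at random given (test_result, X).
    """
    design = np.column_stack([np.asarray(test_result, float), np.asarray(X, float)])
    verified = np.asarray(verified, float)
    disease = np.asarray(disease, float)

    # Model the chance of verification, then weight each verified patient by
    # the inverse of that chance so the verified subset stands in for everyone
    pi = LogisticRegression().fit(design, verified).predict_proba(design)[:, 1]
    m = verified == 1

    # Weighted disease-prediction model, fit among verified patients only
    return LogisticRegression().fit(design[m], disease[m], sample_weight=1.0 / pi[m])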
(Aim 4:) To develop and freely distribute an accessible software package that will implement these methods for statisticians and clinical researchers alike. Finally, the clinical implications of the proposed research are wide-ranging, as much of medicine is diagnostic in nature. These methods have great potential to improve the statistical evaluation of diagnostic tests, which will in turn yield significant improvements in the ability of physicians to make accurate diagnoses.
Screening tests for disease rely on commonly accepted "gold standard" measures to diagnose true disease status. The gold-standard test may, however, be too expensive or too invasive to administer to every subject in a study. Verification bias may arise when the verification of disease status depends on the result of the screening test. This application proposes to develop novel doubly robust estimators for evaluating the accuracy and efficiency of screening tests in the presence of verification bias.