Cancer screening is used for the early detection of cancer. The goal of screening is to detect more cancers while avoiding the pain and suffering caused by incorrect diagnoses. As new cancer screening tests are developed, they are compared to existing modalities. The most common statistical approach for comparing screening tests in paired trials is to examine the difference between the areas under the receiver operating characteristic (ROC) curves, a summary measure of both sensitivity and specificity. However, this approach can introduce bias severe enough to lead to the wrong decision about which screening test is better. The bias arises because some cancers are clinically occult and are not observed during the trial period, which inflates the observed sensitivity of the screening tests. The bias is worse when sensitivity is inflated differentially for the two screening tests being compared, which occurs when the tests refer different fractions of diseased participants to biopsy. Because increasing the rate of biopsy or making other changes to the design is ethically impossible, new statistical methods are needed to correct for this bias. We propose new techniques to test for the presence of bias in paired screening trials, to provide unbiased estimates of sensitivity, specificity, and area under the curve, and to support correct decisions about which screening test has better diagnostic accuracy. The utility of the methods will be demonstrated using data from the Lewin et al., 2002 trial, a comparison of the diagnostic accuracy of digital and film mammography. Publications, presentations, and interactive software will make our findings accessible to physicians, epidemiologists, statisticians, and other study designers. Unbiased clinical trials will identify the screening tests with the best diagnostic accuracy. In turn, more accurate screening tests will improve cancer detection, reduce false-negative diagnoses, and ultimately reduce cancer morbidity and mortality.
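To make the mechanism behind this bias concrete, the following minimal simulation sketch (in Python) illustrates how differential disease verification inflates observed sensitivity. It is not part of the proposed methods and does not use the Lewin et al., 2002 data; the prevalence, the true accuracy of the two hypothetical tests, and the verification rule are all assumptions chosen only to demonstrate the effect described above.

# Illustrative sketch (hypothetical parameters, not from this project): a small
# Monte Carlo simulation of how differential disease verification inflates the
# observed sensitivity of two paired screening tests.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                    # participants in the hypothetical paired trial
prevalence = 0.01              # assumed cancer prevalence

# Assumed true operating characteristics of the two screening tests
sens_A, spec_A = 0.80, 0.95    # e.g., the new modality
sens_B, spec_B = 0.70, 0.95    # e.g., the existing modality

disease = rng.random(n) < prevalence
# Paired results: every participant receives both tests (independence assumed for simplicity)
pos_A = np.where(disease, rng.random(n) < sens_A, rng.random(n) < 1 - spec_A)
pos_B = np.where(disease, rng.random(n) < sens_B, rng.random(n) < 1 - spec_B)

# Differential verification: suppose every participant positive on test A is biopsied,
# but only half of those positive on test B alone are biopsied.
biopsy = pos_A | (pos_B & (rng.random(n) < 0.50))

# Cancers in participants who are never biopsied remain clinically occult during the
# trial, so the naive analysis conditions on verified (biopsy-confirmed) disease only.
verified_disease = disease & biopsy

naive_sens_A = pos_A[verified_disease].mean()
naive_sens_B = pos_B[verified_disease].mean()
true_sens_A = pos_A[disease].mean()
true_sens_B = pos_B[disease].mean()

print(f"true sensitivity    A={true_sens_A:.3f}  B={true_sens_B:.3f}")
print(f"naive (verified)    A={naive_sens_A:.3f}  B={naive_sens_B:.3f}")
# Both naive estimates are inflated, and test A is inflated more because its positives
# are more likely to be verified -- the differential bias described above.

Under these assumed parameters, the naive estimates overstate both sensitivities, and by different amounts, so the apparent advantage of one test over the other is exaggerated relative to the truth.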
Screening reduces cancer deaths because survival is better when cancer is detected early and treated before it spreads. As new screening tests are developed, they are compared to existing screening tests to determine which test is better. Current methods for comparing two screening tests may be biased, and this bias can lead researchers to the wrong conclusion about which screening test is better. Our research will help study designers test for this bias and correct for it in their analyses. Unbiased study designs will enable researchers to choose the best screening tests for early cancer detection.
Ringham, Brandy M; Alonzo, Todd A; Brinton, John T et al. (2014) Reducing decision errors in the paired comparison of the diagnostic accuracy of screening tests with Gaussian outcomes. BMC Med Res Methodol 14:37
Alonzo, Todd A; Brinton, John T; Ringham, Brandy M et al. (2011) Bias in estimating accuracy of a binary screening test with differential disease verification. Stat Med 30:1852-64
Ringham, Brandy M; Alonzo, Todd A; Grunwald, Gary K et al. (2010) Estimates of sensitivity and specificity can be biased when reporting the results of the second test in a screening trial conducted in series. BMC Med Res Methodol 10:3