Diagnostic testing, operationally defined as the use of information from a test or procedure to improve the chances of correctly diagnosing a disease or its severity, relative to the information provided by patient history or clinical intuition, occupies an increasing share of the focus and costs of healthcare. As improvements in medical technology accelerate, studies of the diagnostic capability of new procedures continue to proliferate. Because clinical trials testing the impact of new diagnostic technology are far fewer than those evaluating new medical treatments, most diagnostic test studies examine the accuracy of the new technology relative to a gold standard. Inevitably, procedures are tested in multiple studies, and conflicting results may arise. Because of the observational nature of these studies and the frequent lack of standards for test interpretation, subjective judgment may introduce heterogeneity between studies.

While heterogeneity has received increasing emphasis in meta-analyses of treatment efficacy studies, it has been inadequately considered in meta-analyses of diagnostic tests. These continue to rely on measures that capture only certain aspects of test performance and on fixed-effects models that imply uniform performance across different types of patients, in different settings, with different levels of operator expertise. Moreover, many of the most frequently used analytic methods apply only to special types of study summaries, particularly reports of single sensitivity and specificity values, although such summaries may oversimplify the original conclusions. It is unclear whether this usage arises from a need to standardize inconsistently reported measures of performance, from a failure to appreciate the need for more sophisticated analysis, or from study protocols that simplify test reporting for analytic convenience.
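To make the single-value summaries concrete: each primary study's sensitivity and specificity derive from its 2x2 table of test results against the gold standard. A minimal sketch with made-up counts (illustrative only, not from any study in the proposal):

```python
# Hypothetical 2x2 diagnostic table (invented counts for illustration):
# rows = test result, columns = gold-standard disease status.
tp, fp = 90, 30    # test positive: true positives, false positives
fn, tn = 10, 170   # test negative: false negatives, true negatives

sensitivity = tp / (tp + fn)  # P(test positive | diseased)
specificity = tn / (tn + fp)  # P(test negative | not diseased)

print(sensitivity, specificity)  # 0.9 0.85
```

Reducing a study to this single (sensitivity, specificity) pair discards information such as the positivity threshold used, which is one reason such summaries can oversimplify the original conclusions.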
The analytic techniques also rely on large sample sizes for their validity; performance with small sample sizes has not been evaluated. The applicant therefore proposes a study of the diagnostic test literature to assess the quality and adequacy of the meta-analyses being performed to evaluate test accuracy, to examine the impact of heterogeneity on conclusions, and to determine the best methods for analyzing such data. The applicant will first evaluate and compare different models for analyzing test accuracy data in published diagnostic test meta-analyses by: 1) abstracting outcome and covariate information from all meta-analyses of diagnostic tests listed in Medline as published since 1990; 2) developing and extending Bayesian multilevel random-effects regression models, and developing and modifying software to implement new and existing models for analyzing diagnostic test data; and 3) applying and comparing the models developed on the collected meta-analyses. Next, the investigators will evaluate how well the meta-analyses summarize the information in the diagnostic test studies they comprise by: 4) collecting all studies included in 30 of the meta-analyses; 5) comparing the outcomes and covariates extracted from those studies with those reported in the 30 meta-analyses; and 6) updating the 30 meta-analyses with information collected from the studies and determining any changes in conclusions. These tasks will contribute to the development of recommendations for improving diagnostic test meta-analyses.
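The proposal itself concerns Bayesian multilevel models, which are beyond a short sketch; a simpler illustration of why random effects matter is the classical DerSimonian–Laird estimator, which pools logit sensitivities while allowing between-study variation that a fixed-effects model assumes away. All study counts below are invented for illustration:

```python
import math

# Made-up per-study counts (true positives, false negatives) for a hypothetical test.
studies = [(45, 5), (80, 20), (30, 15), (60, 10)]

# Logit-transformed sensitivities with approximate within-study variances
# (0.5 continuity correction; delta-method variance 1/(tp+0.5) + 1/(fn+0.5)).
y = [math.log((tp + 0.5) / (fn + 0.5)) for tp, fn in studies]
v = [1 / (tp + 0.5) + 1 / (fn + 0.5) for tp, fn in studies]

# Fixed-effect (inverse-variance) pooled logit sensitivity.
w = [1 / vi for vi in v]
mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# DerSimonian-Laird between-study variance tau^2 from Cochran's Q.
q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects pool re-weights each study by 1/(v_i + tau^2),
# so small studies are down-weighted less than under the fixed-effect model.
w_re = [1 / (vi + tau2) for vi in v]
mu_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

# Back-transform the pooled logits to the sensitivity scale.
pooled_sens_fe = 1 / (1 + math.exp(-mu_fe))
pooled_sens_re = 1 / (1 + math.exp(-mu_re))
```

When tau^2 is zero the two pooled estimates coincide; when studies are heterogeneous the random-effects estimate and its wider interval better reflect variation across patient populations, settings, and operator expertise.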

Agency
National Institutes of Health (NIH)
Institute
Agency for Healthcare Research and Quality (AHRQ)
Type
Research Project (R01)
Project #
1R01HS013328-01
Application #
6559484
Study Section
Health Care Technology and Decision Science (HTDS)
Program Officer
Berliner, Elise
Project Start
2002-09-16
Project End
2005-08-31
Budget Start
2002-09-16
Budget End
2003-08-31
Support Year
1
Fiscal Year
2002
Total Cost
Indirect Cost
Name
Tufts University
Department
Type
DUNS #
City
Boston
State
MA
Country
United States
Zip Code
02111
Dahabreh, Issa J; Chung, Mei; Kitsios, Georgios D et al. (2013) Survey of the methods and reporting practices in published meta-analyses of test performance: 1987 to 2009. Res Synth Methods 4:242-55