Comparative effectiveness research (CER) relies fundamentally on accurate and timely assessment of the benefits and risks of different treatment options. Empirical evidence suggests that a median of 35% of efficacy outcomes and 50% of safety outcomes per parallel-group trial were incompletely reported, and that statistically significant outcomes were more likely to be fully reported than non-significant outcomes, for both efficacy and safety. Such bias is referred to as outcome reporting bias (ORB), i.e., "the selective reporting of some outcomes but not others, depending on the nature and direction of the results (i.e., missing certain outcomes)." Selective reporting can invalidate results from meta-analyses. As acknowledged in the Cochrane Handbook, "Statistical methods to detect within-study selective reporting (i.e., outcome-reporting bias) are, as yet, not well developed" (Chapter 8.14.2, version 5.0.2); there is therefore a critical need to develop methods that specifically account for ORB. In response to PA-16-160, the overall goal of this proposal is to develop, test, and evaluate new statistical methods and user-friendly software to account for ORB in multivariate and network meta-analyses. We will focus on three aims: (1) propose and evaluate new methods for quantifying the evidence of ORB, adjusting for ORB, and conducting sensitivity analysis under ORB in multivariate meta-analysis; (2) generalize the methods in Aim 1 to network meta-analyses (in which more than two treatments are compared simultaneously) and propose methods to evaluate evidence consistency; and (3) develop publicly available, user-friendly, and well-documented software and apply the proposed methods to research data sets. We will assess the strengths and weaknesses of the proposed methods through carefully designed simulation studies and, more importantly, through applications to (network) meta-analyses of clinical trials with multivariate outcomes drawn from multiple existing databases, and we will develop statistical software for the wider research community. Completion of these three aims will directly benefit the CER program by providing state-of-the-art methods implemented in a user-friendly R package that will be made freely available to the public. This has the potential to catalyze the development of many new methods, amplifying the impact of our project.
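
To illustrate the kind of analysis the proposed methods and software would support, the sketch below fits a bivariate (efficacy and safety) random-effects meta-analysis in R and then re-fits it after deleting the non-significant safety estimates, as a crude stand-in for an ORB sensitivity analysis. This is a minimal sketch under assumptions, not the methodology proposed in this application: the data are simulated, the significance-based deletion rule is purely illustrative, and the use of the existing 'metafor' package (rather than the proposed software) is an assumption.

## Illustrative sketch only; NOT the proposal's method.
library(metafor)

set.seed(1)
k <- 20                                   # number of trials
dat <- data.frame(
  trial   = rep(seq_len(k), each = 2),
  outcome = rep(c("efficacy", "safety"), times = k),
  yi      = c(rbind(rnorm(k, 0.3, 0.2),   # simulated efficacy log odds ratios
                    rnorm(k, 0.1, 0.2))), # simulated safety log odds ratios
  vi      = runif(2 * k, 0.01, 0.05)      # within-trial sampling variances
)

## Bivariate random-effects model with correlated random effects across the
## two outcomes (within-trial sampling covariances assumed zero for simplicity)
fit <- rma.mv(yi, vi, mods = ~ outcome - 1,
              random = ~ outcome | trial, struct = "UN", data = dat)
summary(fit)

## Crude ORB sensitivity illustration: suppose non-significant safety results
## had been suppressed; drop them and compare the pooled safety estimate.
z <- with(dat, yi / sqrt(vi))
suppressed <- dat$outcome == "safety" & abs(z) < 1.96
fit_orb <- rma.mv(yi, vi, mods = ~ outcome - 1,
                  random = ~ outcome | trial, struct = "UN",
                  data = dat[!suppressed, ])
summary(fit_orb)

Comparing the two fits shows how selective omission of non-significant safety results can shift the pooled safety estimate; the proposed work aims to quantify, adjust for, and probe such distortions in a principled way rather than by ad hoc deletion.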

Public Health Relevance

Systematic reviews of randomized controlled trials are central to the practice of evidence-based medicine, and properly conducted meta-analysis is critical to ensuring the quality of systematic reviews. However, it is increasingly recognized that selective reporting of outcomes is highly prevalent in randomized controlled trials and can invalidate results from meta-analyses, an issue that remains understudied in the literature. In this application, we address this important problem by proposing a general framework to quantify, adjust for, and evaluate (through sensitivity analysis) the impact of outcome reporting bias in both multiple-outcome meta-analysis and multiple-outcome network meta-analysis.

Agency
National Institutes of Health (NIH)
Institute
National Library of Medicine (NLM)
Type
Research Project (R01)
Project #
5R01LM012607-04
Application #
9999033
Study Section
Biomedical Library and Informatics Review Committee (BLR)
Program Officer
Sim, Hua-Chuan
Project Start
2017-09-08
Project End
2021-08-31
Budget Start
2020-09-01
Budget End
2021-08-31
Support Year
4
Fiscal Year
2020
Total Cost
Indirect Cost
Name
University of Pennsylvania
Department
Biostatistics & Other Math Sci
Type
Schools of Medicine
DUNS #
042250712
City
Philadelphia
State
PA
Country
United States
Zip Code
19104