Comparative effectiveness research (CER) relies fundamentally on accurate and timely assessment of the benefits and risks of different treatment options. Empirical evidence suggests that a median of 35% of efficacy outcomes and 50% of safety outcomes per parallel-group trial were incompletely reported, and that statistically significant outcomes were more likely to be fully reported than non-significant outcomes, for both efficacy and safety. Such bias is referred to as outcome reporting bias (ORB), i.e., "the selective reporting of some outcomes but not others, depending on the nature and direction of the results (i.e., missing certain outcomes)." Selective reporting can invalidate the results of meta-analyses. As the Cochrane Handbook acknowledges, "Statistical methods to detect within-study selective reporting (i.e., outcome-reporting bias) are, as yet, not well developed" (chapter 8.14.2, version 5.0.2); there is therefore a critical need to develop methods that specifically account for ORB.

In response to PA-16-160, the overall goal of this proposal is to develop, test, and evaluate new statistical methods and user-friendly software to account for ORB in multivariate and network meta-analyses. Specifically, we will: (1) propose and evaluate new methods for quantifying the evidence of ORB and adjusting for ORB, and develop a sensitivity-analysis procedure under ORB in multivariate meta-analysis; (2) generalize the methods in Aim 1 to network meta-analyses (where more than two treatments are compared simultaneously) and propose methods to evaluate evidence consistency; and (3) develop publicly available, user-friendly, and well-documented software and apply the proposed methods to research data sets. We will use carefully designed simulation studies to investigate the performance of the proposed methods, apply them to multiple existing databases, and develop statistical software for the wider research community.
We propose to empirically assess the strengths and weaknesses of these methods through carefully designed simulation studies and, more importantly, through applications to (network) meta-analyses of clinical trials with multivariate outcomes. Completing these three aims will directly benefit the CER program by providing state-of-the-art methods implemented in a user-friendly R package that will be made freely available to the public. This work also has the potential to catalyze the development of many new methods, amplifying the impact of our project.
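The distortion that ORB introduces can be illustrated with a small simulation. The sketch below (a hypothetical illustration, not the proposal's method) generates trial results for a treatment with no true effect, preferentially "reports" the statistically significant, favorable outcomes, and compares a simple fixed-effect inverse-variance pooled estimate with and without the selective reporting. The trial parameters and reporting probabilities are assumptions chosen for illustration only.

```python
# Illustrative sketch: how outcome reporting bias (ORB) can distort a
# simple fixed-effect meta-analysis. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def pooled_estimate(effects, ses):
    """Fixed-effect inverse-variance pooled estimate."""
    w = 1.0 / np.asarray(ses) ** 2
    return float(np.sum(w * effects) / np.sum(w))

n_trials = 2000
true_effect = 0.0              # the treatment truly has no effect
se = np.full(n_trials, 0.2)    # common standard error across trials
effects = rng.normal(true_effect, se)

# Unbiased analysis: every trial reports its outcome.
full = pooled_estimate(effects, se)

# ORB: significant, favorable results (z > 1.96) are always reported;
# non-significant results are reported only 10% of the time (assumed rates).
z = effects / se
report_prob = np.where(z > 1.96, 1.0, 0.1)
reported = rng.random(n_trials) < report_prob

biased = pooled_estimate(effects[reported], se[reported])

print(f"pooled estimate, all outcomes reported: {full:+.3f}")
print(f"pooled estimate, under ORB:             {biased:+.3f}")
```

Even though the true effect is zero, the pooled estimate computed only from the reported outcomes is pulled away from zero, which is exactly the kind of bias the proposed quantification and sensitivity-analysis methods are designed to detect and correct.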

Public Health Relevance

Systematic reviews of randomized controlled trials are key to the practice of evidence-based medicine, and properly conducted meta-analyses are critical to ensuring the quality of systematic reviews. However, it is increasingly recognized that selective reporting of outcomes is highly prevalent in randomized controlled trials and can invalidate the results of meta-analyses. This issue has been understudied in the literature. In this application, we tackle this important research area by proposing a general framework to quantify, adjust for, and evaluate (through sensitivity analysis) the impact of outcome reporting bias in both multiple-outcome meta-analysis and multiple-outcome network meta-analysis.

National Institutes of Health (NIH)
National Library of Medicine (NLM)
Research Project (R01)
Study Section
Biomedical Library and Informatics Review Committee (BLR)
Program Officer
Sim, Hua-Chuan
University of Pennsylvania
Biostatistics & Other Math Sci
Schools of Medicine
United States