There is a fundamental lack of knowledge regarding the ability of approaches that assess the strength of evidence (SOE) to identify replicability of treatment effects in systematic reviews and meta-analyses. Continued existence of this gap represents an important barrier to the efficient use of systematic reviews because, until it is filled, it will be extremely difficult for stakeholders to confidently apply synthesized evidence in clinical practice and decision making. The long-term goal of our research is to develop and apply methodologies that improve the trustworthiness of evidence syntheses and allow patients, clinicians, and policy makers to make optimal decisions. The overall objective of this application, which is the next step towards the pursuit of that goal, is to determine the role of evidence replicability in assessing the trustworthiness of a body of evidence. Our central hypothesis is that replicability of the evidence, i.e. whether the treatment effect is replicated in more than one study in a meta-analysis, is currently undetected by existing meta-analytical methods, because they do not adequately control the type I error, and as a result the trustworthiness of existing evidence is inappropriately graded by the current SOE framework. The rationale for the proposed work is that, once we have shown the role of quantifying replicability in systematic reviews, replicability analyses are likely to improve inferences on evidence trustworthiness based on SOE grades, resulting in better clinical and policy decisions. We plan to test our central hypothesis, and thereby accomplish our overall objective, by pursuing the following two specific aims: 1) Determine the extent and attributes of replicable evidence in meta-analyses of clinical trials; and 2) Determine whether current SOE grading systems accurately identify replicable evidence.
Our approach involves measuring replicability with the Benjamini-Heller partial conjunction (BHPC) hypothesis test, applied to the most recent version of the Cochrane Database of Systematic Reviews (CDSR) and to evidence reports from the Evidence-based Practice Center (EPC) Program of the Agency for Healthcare Research and Quality (AHRQ). Under the first aim, we will quantify replicability in meta-analyses across the entire healthcare domain and describe the characteristics of meta-analyses lacking replicable evidence.
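To make the replicability criterion concrete, the following is a minimal Python sketch of a BHPC-style partial conjunction p-value, assuming the Fisher-combination variant described by Benjamini and Heller: for the null that fewer than u of n studies have a true effect, Fisher's method is applied to the n - u + 1 largest study p-values. The function name and the example p-values are illustrative, not part of the proposed protocol.

```python
# Illustrative sketch (not the project's implementation): partial conjunction
# p-value for replicability. With u = 2, the null hypothesis is that at most
# one study in the meta-analysis has a true treatment effect, so rejecting it
# supports replicated evidence.
from math import log
from scipy.stats import chi2

def bhpc_pvalue(pvalues, u=2):
    """Fisher-combination partial conjunction p-value for H0: fewer than
    u of the n studies have a true effect."""
    n = len(pvalues)
    if not 1 <= u <= n:
        raise ValueError("u must be between 1 and the number of studies")
    largest = sorted(pvalues)[u - 1:]           # the n - u + 1 largest p-values
    stat = -2.0 * sum(log(p) for p in largest)  # Fisher's combination statistic
    return chi2.sf(stat, df=2 * len(largest))   # chi-square, 2(n - u + 1) df

# Example: five hypothetical trial p-values; a small result suggests the
# effect is replicated in at least two studies.
print(bhpc_pvalue([0.001, 0.003, 0.04, 0.20, 0.60], u=2))
```

Note that with u = 1 this reduces to Fisher's ordinary combined test of the global null; the u = 2 case is what distinguishes replicated evidence from a single, possibly false-positive, trial.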
Under the second aim, we will determine the validity of current SOE grades for identifying replicable, and hence true, treatment effects. The proposed research is innovative, in our opinion, because it represents a substantive departure from the status quo by determining whether SOE grades can accurately identify evidence that is robust to type I error. New research horizons are expected to open as a result. The proposed research is significant because it will determine whether current SOE grading systems have limitations in their ability to identify replicable evidence. Ultimately, such knowledge has the potential to inform the development and implementation of novel approaches to evaluating the SOE.
Replicability of treatment effects, i.e. observing the same or a similar effect in more than one trial, protects patients, clinicians, and policy makers from claiming conclusive evidence solely on the basis of a single study, which may be a false positive due to chance or bias. In this work, we will quantify the degree of replicability in meta-analyses across the entire healthcare domain and determine whether current tools for assessing the quality and strength of evidence adequately detect replicable evidence. Our results will improve the interpretation of evidence synthesis findings by systematic reviewers and end-users (e.g. patients, health professionals, policy makers, advocacy groups) and help them make optimal treatment decisions.