This R03 small grant application, submitted by a new NIH Principal Investigator, will address important gaps in the mediation analysis of substance use intervention data. Behavioral theories guiding program development have become increasingly complex, resulting in interventions that target multiple mechanisms, or mediators, believed to be causally related to substance use. Mediation analysis has been instrumental in exploring these pathways to substance use and the role of interventions in altering these developmental trajectories. Hypotheses about mediation have likewise grown more complex and now often include questions about how mediated effects compare to one another. Such contrasts of mediated effects are an increasingly important aspect of program evaluation because they allow researchers to gauge the comparative effectiveness of several treatments or treatment components. Comparisons of mediated effects can help researchers tailor intervention programs more efficiently to the most salient mechanisms of behavior change, information that is critical for developing cost-effective interventions in the face of decreasing availability of resources. Unfortunately, in practice these comparisons have often been based on invalid metrics, such as the absolute size of the effects in question or their t statistics. Reliance on such improper methods can obscure the processes that truly account for the majority of behavior change. Similarly, crude comparisons of group-specific mediated effects may suggest that one group would not benefit from a component, a conclusion that could be supported or refuted by more stringent statistical examination.

This study will provide applied substance use researchers with the tools needed to conduct comparisons of mediated effects properly by addressing two specific aims. The first aim is an extensive simulation study that will examine the statistical properties of the five known contrast tests: the Wald test, the percentile bootstrap, the bias-corrected bootstrap, likelihood-based confidence intervals, and a dummy latent variable test. Because the statistical properties (e.g., power, Type I error rate) of these tests are either unknown or unclear, the simulation will provide guidance about the situations (e.g., types of hypotheses, data structures) in which each test should be employed. The simulation results will then inform the second aim: a thorough application of contrast methods to existing data from two multicomponent substance use prevention programs. Both programs are representative of many prevention designs, with features such as multiple mediators and outcomes and longitudinal collection of self-reported data. These features, together with previous findings of significant mediation in both data sets, make them well suited to a pedagogical demonstration of how to formulate and test contrasts of mediated effects.
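To make the contrast concept concrete, the sketch below illustrates one of the five tests named above, the percentile bootstrap, applied to the difference between two product-of-coefficients mediated effects (a1*b1 - a2*b2) in a parallel two-mediator model. This is a minimal illustration only: the variable names, effect sizes, and sample size are assumptions chosen for demonstration and do not come from the prevention data sets or simulation conditions described in this application.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_coefs(X, y):
    # Least-squares coefficients for y ~ [intercept, X columns].
    Z = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

def contrast(x, m1, m2, y):
    # Product-of-coefficients mediated effects in a parallel two-mediator
    # model, and their difference: a1*b1 - a2*b2.
    a1 = ols_coefs(x[:, None], m1)[1]                # x -> m1
    a2 = ols_coefs(x[:, None], m2)[1]                # x -> m2
    b = ols_coefs(np.column_stack([x, m1, m2]), y)   # y on x, m1, m2
    return a1 * b[2] - a2 * b[3]                     # (m1 path) - (m2 path)

# Illustrative simulated data (values are assumptions, not study estimates).
n = 500
x = rng.binomial(1, 0.5, n).astype(float)            # program exposure
m1 = 0.6 * x + rng.normal(size=n)                    # first mediator
m2 = 0.3 * x + rng.normal(size=n)                    # second mediator
y = 0.5 * m1 + 0.5 * m2 + rng.normal(size=n)         # substance use outcome

# Percentile bootstrap: resample cases, re-estimate the contrast, and take
# the 2.5th and 97.5th percentiles as a 95% confidence interval.
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = contrast(x[idx], m1[idx], m2[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"contrast estimate: {contrast(x, m1, m2, y):.3f}")
print(f"95% percentile bootstrap CI: ({lo:.3f}, {hi:.3f})")
```

A confidence interval for the contrast that excludes zero would indicate that one mediated effect is reliably larger than the other; the Wald, bias-corrected bootstrap, likelihood-based, and dummy latent variable approaches address the same question through different interval or test constructions.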
This study will expand the tools available to public health researchers as they work to uncover and change the mechanisms by which substance use develops. Comparisons of the effects operating through these mechanisms, known as mediated effects, are important for determining how treatment and prevention programs achieve their effects and how program efficacy differs across groups defined by factors such as gender and age. The results of this study will identify the best methods for making these comparisons and will provide concrete pedagogical examples that applied substance use researchers can apply to their own evaluations.