This research project will develop new strategies for investigating causal mechanisms in multi-site experimental and quasi-experimental evaluations of intervention programs. Past research in a wide range of fields has often reported considerable cross-site heterogeneity in average program effects, yet program evaluators have not been able to take full advantage of multi-site data. The analytic strategies developed in this project will be suitable for testing scientific theories about the intermediate processes of an intervention and for assessing whether and how the program mechanism operates differently across settings. The new strategies will allow applied researchers to flexibly pose empirical questions crucial for testing the generalizability of an intervention theory across settings and for unpacking complex mediation mechanisms, making it possible to revisit the intervention theory and to suggest specific site-level modifications of intervention practice. The project is motivated by application examples including the National Job Corps Study, which evaluates a program designed to promote the economic independence of disadvantaged youth; the Head Start Impact Study, which evaluates the federal early childhood education program for children living in poverty; and math curricular reforms in Chicago Public Schools aimed at improving the math learning of low-achieving ninth graders. The research team will disseminate open-source R packages and Stata ado files along with user manuals and pedagogical data examples.

This research builds on the tradition of mediation analysis in the social sciences and on recent developments in statistical theory and methods for causal mediation analysis. Existing methods generally require the analyst to specify both mediator models and outcome models that rest on comparatively strong model-based assumptions. Assessing between-site heterogeneity in causal mechanisms is especially challenging when many of these assumptions are impractical. The research team has made initial attempts to extend a propensity score-based weighting method to multi-site analysis of simple mediation mechanisms; this weighting method does not require outcome model specifications. The latest analytic procedure, however, is limited to experiments with a relatively large sample size per site. To overcome these challenges, the project will advance methodological developments in causal parameter estimation, propensity score estimation, and sensitivity analysis. The investigators will develop a novel pseudo-outcome random-effects strategy for causal parameter estimation, analyzed through a method-of-moments procedure, to avoid model-based assumptions. By pooling data from all the sites, the analysis will not be constrained by a small sample size per site and will be flexible enough to investigate complex mediation mechanisms. Asymptotic variances will be derived that account for uncertainty in propensity score estimation. To enhance robustness to propensity score model misspecification, a covariate-balancing propensity score estimation approach will be incorporated. The investigators will also develop a new weighting-based approach to sensitivity analysis for omitted post-treatment as well as pre-treatment confounders. These new strategies will be extended from simple to complex mechanisms and from experimental to quasi-experimental multi-site data.
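
To illustrate the kind of outcome-model-free, propensity-score-based weighting the abstract refers to, the following is a minimal R sketch of one common scheme for a single-site, binary-mediator case (ratio-of-mediator-probability weighting), not necessarily the investigators' exact procedure. All variable names (trt, med, out, x1) and the simulated data are hypothetical, and the project's multi-site, method-of-moments extensions go well beyond this.

```r
## Minimal sketch: propensity-score weighting for a mediation contrast.
## Simulated data with hypothetical variable names; not the project's implementation.
set.seed(1)
n   <- 2000
x1  <- rnorm(n)                                            # pre-treatment covariate
trt <- rbinom(n, 1, 0.5)                                    # randomized treatment
med <- rbinom(n, 1, plogis(-0.3 + 0.8 * trt + 0.5 * x1))    # binary mediator
out <- 1 + 0.6 * trt + 0.9 * med + 0.4 * x1 + rnorm(n)      # outcome
d <- data.frame(trt, med, out, x1)

## Mediator propensity scores estimated separately within each treatment arm
p1 <- predict(glm(med ~ x1, family = binomial, data = d, subset = trt == 1),
              newdata = d, type = "response")               # P(M = 1 | X, T = 1)
p0 <- predict(glm(med ~ x1, family = binomial, data = d, subset = trt == 0),
              newdata = d, type = "response")               # P(M = 1 | X, T = 0)

## Weight each treated unit by the ratio of the control-arm to treatment-arm
## probability of its observed mediator value (no outcome model required)
w <- ifelse(d$med == 1, p0 / p1, (1 - p0) / (1 - p1))

## Counterfactual mean E[Y(1, M(0))] and a simple effect decomposition
y1_m0 <- weighted.mean(d$out[d$trt == 1], w[d$trt == 1])
y1_m1 <- mean(d$out[d$trt == 1])
y0_m0 <- mean(d$out[d$trt == 0])
c(indirect = y1_m1 - y1_m0,   # effect transmitted through the mediator
  direct   = y1_m0 - y0_m0)   # effect not transmitted through the mediator
```

In a multi-site setting of the kind described above, site-specific versions of these weighted contrasts would feed into a random-effects, method-of-moments analysis that pools information across sites rather than requiring a large sample within each site.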

Agency: National Science Foundation (NSF)
Institute: Division of Social and Economic Sciences (SES)
Application #: 1659935
Program Officer: Cheryl Eavey
Budget Start: 2017-07-01
Budget End: 2021-06-30
Fiscal Year: 2016
Total Cost: $410,000
Institution: University of Chicago
City: Chicago
State: IL
Country: United States
Zip Code: 60637