In light of increasing accountability pressures, the PI proposes to develop, describe, and test statistical methods for reducing external validity bias in random-assignment evaluations of STEM programs carried out in non-representative samples of sites. A series of simulations using real-world data will test the conditions under which different statistical methods can reduce this bias.
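To make the target of these simulations concrete, the sketch below shows how non-random site selection biases the sample average treatment effect (SATE) away from the population average treatment effect (PATE) even when treatment is randomized within the participating sites. The data-generating process and all parameter values are illustrative assumptions for exposition, not the project's actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 10,000 sites; a site-level covariate x
# (e.g., school size) moderates the true treatment effect.
n_pop = 10_000
x = rng.normal(size=n_pop)
site_effect = 0.20 + 0.15 * x          # true effect at each site

# Population average treatment effect: the estimand of interest.
pate = site_effect.mean()

# Non-random selection: sites with larger x are more likely to
# volunteer for the evaluation (an assumed logistic selection model).
p_select = 1 / (1 + np.exp(-(-1.0 + 1.5 * x)))
selected = rng.random(n_pop) < p_select

# Within-site randomization recovers the effect for selected sites,
# i.e., the SATE, without internal validity bias.
sate = site_effect[selected].mean()

print(f"PATE = {pate:.3f}, SATE = {sate:.3f}, "
      f"external validity bias = {sate - pate:.3f}")
```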
The methodological approach uses real-world data to examine four statistical methods: regression with interactions; Bayesian additive regression trees (BART); inverse probability of selection weighting (IPSW); and subclassification.
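As one illustration of how such a method operates, the following sketch applies IPSW under the same kind of hypothetical setup as above: sites are weighted by the inverse of their estimated probability of participating, so the weighted sample profile resembles the population profile. The logistic selection model and the assumption that the covariate is observed for non-participating sites are illustrative simplifications, not claims about the project's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Same hypothetical setup: covariate x moderates the site-level effect,
# and sites with larger x are over-represented among participants.
n_pop = 10_000
x = rng.normal(size=n_pop)
site_effect = 0.20 + 0.15 * x
p_select = 1 / (1 + np.exp(-(-1.0 + 1.5 * x)))
selected = rng.random(n_pop) < p_select

# Fit a selection model on population data (assumes the covariate is
# observed for both participating and non-participating sites).
sel_model = LogisticRegression().fit(x.reshape(-1, 1), selected)
p_hat = sel_model.predict_proba(x[selected].reshape(-1, 1))[:, 1]

# Reweight each participating site by 1 / (estimated selection
# probability) and take the weighted average of site effects.
w = 1 / p_hat
ipsw_estimate = np.average(site_effect[selected], weights=w)

print(f"Unweighted SATE = {site_effect[selected].mean():.3f}")
print(f"IPSW estimate   = {ipsw_estimate:.3f}")
print(f"True PATE       = {site_effect.mean():.3f}")
```

In this toy setting the IPSW estimate moves from the biased SATE back toward the PATE; the proposed simulations would probe when such corrections succeed or fail with real-world covariates.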
The use of real-world data in the proposed simulation studies will test the efficacy of each method in reducing such bias. The work will yield four products of high value to STEM evaluators and researchers: (1) a better understanding of the types of schools and districts that participate in impact evaluations; (2) analysis methods that researchers can use to improve the external validity of existing evaluations; (3) guidance on the types of data to collect in future evaluations; and (4) a data set and framework that researchers can use to conduct simulation studies investigating other methodological questions.