This project aims to enhance understanding of labor supply and program participation decisions by: 1) developing and estimating a behavioral model of labor supply using survey and administrative data from a randomized welfare experiment, 2) assessing the importance of precise measurement of agents' choices and incentives on the accuracy of the model's predictions, and 3) evaluating the ability of the estimated model to predict the results of other experiments in different populations.

The intellectual merit of this research is to generate accurate quantitative models of labor supply and program participation. Such models can be used to evaluate the impact of reforms to the social insurance and tax system and to make quantitative statements about the welfare effects of proposed and existing policies. Our estimation approach combines observational and experimental variation in incentives and opportunities. The experimental variation in welfare rules allows us to relax many of the key assumptions typically employed when estimating behavioral responses to changes in program rules. Our ability (or inability) to accurately predict the results of markedly different randomized experiments provides a policy-relevant metric for evaluating our modeling framework in general and, more specifically, our ability to account for self-selection into program and labor market participation.

The project will deliver two research papers. The first paper focuses on assessing the extent to which the ability of labor supply models to match experimental impacts depends upon measurement issues and modeling complexity. We will begin by developing and estimating a model of welfare participation and labor supply using data from the California Work Pays Demonstration Project (CWPDP) -- a large-scale randomized welfare reform experiment implemented in California in the early 1990s. The longitudinally merged administrative and survey data available from this experiment allow us to measure the budgets (and hence incentives) of agents in substantially more detail than previous studies. Moreover, the various components of the data contain independent repeated measurements on a number of key variables, allowing us to detect and model recording and reporting problems in a more satisfactory manner than has previously been attempted in this literature.

We will then examine how our estimates change when we coarsen the choices available to agents, use approximate rather than exact policy rules, or ignore measurement problems. Our focus will be on how these changes influence our ability to match experimental impacts on quantities of direct policy interest such as total welfare payments and program participation. The analysis will also provide a quantitative unbundling of the incentive effects associated with the CWPDP treatment, allowing us to ascertain, for example, the relative importance of changes in welfare eligibility rules vs. changes in earnings disregards.

The second paper is motivated by the concern that randomized experiments may have little external validity. The CWPDP experiment, for instance, was conducted on a sample of ongoing welfare recipients residing in four California counties during a sustained boom in the state job market. To what extent can what we learn from the California experiment be generalized to other populations, time periods, and program mixes? The many state welfare experiments conducted during the 1990s provide us with the opportunity to answer this question. We will use our estimates from the CWPDP sample to generate predictions about the results of randomized experiments in two other states. This will entail developing methods to re-estimate distributions of unobservable preferences and skills from the control observations available in each state's sample. We will conclude with an assessment of the practical advantages (if any) of access to experimental variation in generating credible out-of-sample policy predictions.

The broader impact of our project will be to develop methods for enhancing what can be learned from social experiments through use of a behavioral model. We seek to illustrate how to build and estimate economic models capable of "pooling together" and interpreting the results of multiple experiments in the presence of unobserved individual-level heterogeneity. We believe such methods will become increasingly important as social and field experiments continue to proliferate in economics.

Agency: National Science Foundation (NSF)
Institute: Division of Social and Economic Sciences (SES)
Application #: 0962352
Program Officer: Georgia Kosmopoulou
Project Start:
Project End:
Budget Start: 2010-06-01
Budget End: 2014-05-31
Support Year:
Fiscal Year: 2009
Total Cost: $434,700
Indirect Cost:
Name: National Bureau of Economic Research Inc
Department:
Type:
DUNS #:
City: Cambridge
State: MA
Country: United States
Zip Code: 02138