The estimation of models of dynamic decision making is complicated by the need to calculate the future payoffs associated with different choice paths. These expected future payoffs from behaving optimally are called the value function. Calculating the value function for a complicated dynamic game is computationally expensive because of the large number of possible choice paths. This award funds research to develop a new method for approximating value functions. The approximation is based on sieve methods, and it has good asymptotic properties as the complexity of the sieve increases. These methods will be useful both for models of individual decision making and for dynamic games, in which several individuals make decisions over time. The sieve method can be used to solve for the finite horizon Nash equilibrium; it can also be used to test whether this equilibrium is being played by individuals in a particular application. The research team will apply the method to data from a sample of sixth-grade students; the dynamic game in this application is the formation of friendships and the way students build strong peer networks over time.
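For readers unfamiliar with the term, a standard single-agent statement of the object involved (textbook notation, not language from the award itself) is the Bellman equation

    V(s) = max over actions a of { u(s, a) + b * E[ V(s') | s, a ] },

where s is the current state, a is the chosen action, u is the flow payoff, b is the discount factor, and s' is the next state. The value function V solves this fixed-point equation; computing it exactly requires evaluating it at every state, which is what becomes infeasible when the number of possible states and choice paths is large.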

The new method will potentially advance empirical work in many areas of applied microeconomics. Because the method is less computationally expensive than exact solution methods, it may be well suited for use by researchers who wish to estimate dynamic game models at a reasonable cost.

Project Report

Federal Award ID: 1124193

This research project considers dynamic optimization problems in which economic agents (e.g., individuals, firms) make optimal decisions over a period of time. This framework is used to model decisions made by a single agent (the single-agent problem) or by a group of interrelated agents (an economic game). A key object of interest in this model is the value function, which represents the maximum value (i.e., utility or profit) that an economic agent can obtain by behaving optimally. For example, when an individual is deciding whether to attend college, he or she must form expectations about how his or her life would be different after attending college. This expectation about the future is the value function.

Our goal in this project was to propose a new methodology for approximating the value function, known as sieve value function iteration (SVFI). In summary, SVFI uses a non-parametric estimation procedure known as the method of sieves to conduct value function iteration. Under the method of sieves, the unknown function is approximated using a sequence of less complex, often finite-dimensional (i.e., parametric) functional spaces. Typical examples of sieve spaces include polynomial or trigonometric functions. The SVFI approach is straightforward to understand and implement and thus has the potential to be widely applied in economics and other disciplines. The framework can also be implemented flexibly. For example, the method can be implemented on a finite subset of states in the state space, which is attractive for problems with large (even infinite) state spaces. We also show that the method applies equally well to infinite and finite horizon problems.

Besides proposing the approximation method, we analyze its formal properties as the complexity of the sieve space diverges to infinity. In particular, we establish results on (a) consistency, (b) rates of convergence, and (c) bounds on the approximation error. Furthermore, we show that the approximation method can be embedded in an estimation routine to provide consistent estimates of the dynamic model's parameters.

To organize the research project, we divided the analysis into single and multiple agent problems. The analysis of SVFI in single agent problems resulted in a publication in Advances in Econometrics titled "Approximating High-Dimensional Dynamic Models: Sieve Value Function Iteration," joint work by Peter Arcidiacono, Patrick Bayer, Federico Bugni, and Jonathan James. The analysis of SVFI in multiple agent problems is a work in progress titled "Sieve Value Function Iteration for Large State Space Dynamic Games" by the same authors; we are hopeful that it will be completed in the near future. The results of the research project have been disseminated through multiple seminar and conference presentations at Duke University. In addition, multiple graduate students (especially at Duke University) have learned to use our methods and are currently applying them in their dissertations.
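To make the two steps concrete, the following is a minimal sketch of SVFI in Python for a toy infinite-horizon machine-replacement problem. The model primitives, state grid, polynomial degree, and tolerance are all illustrative assumptions of ours, not specifications from the paper; the sketch only shows the mechanics described above, namely a Bellman update on a finite subset of states followed by a least-squares projection onto a polynomial sieve space.

import numpy as np

# --- Toy model (illustrative assumptions, not the paper's application) ---
beta = 0.95                            # discount factor
grid = np.linspace(0.0, 10.0, 25)      # finite subset of a continuous state space
K = 5                                  # sieve complexity: polynomial degree

def basis(s, K):
    # Polynomial sieve [1, x, x^2, ..., x^K] on the state rescaled to [-1, 1]
    x = 2.0 * np.atleast_1d(s) / 10.0 - 1.0
    return np.vander(x, K + 1, increasing=True)

def flow_payoff(s, a):
    # a = 1: replace the machine (fixed cost); a = 0: keep paying maintenance
    return -5.0 * a - 0.3 * s * (1 - a)

def next_state(s, a):
    # Replacement resets the state; otherwise the machine depreciates
    return np.where(a == 1, 0.0, np.minimum(s + 1.0, 10.0))

# --- SVFI loop: Bellman update on the grid, then project onto the sieve ---
theta = np.zeros(K + 1)                # sieve coefficients representing V
for it in range(1000):
    choice_values = np.column_stack([
        flow_payoff(grid, a) + beta * basis(next_state(grid, a), K) @ theta
        for a in (0, 1)
    ])
    v_update = choice_values.max(axis=1)          # Bellman update at grid points
    theta_new, *_ = np.linalg.lstsq(basis(grid, K), v_update, rcond=None)
    if np.max(np.abs(theta_new - theta)) < 1e-8:  # approximate fixed point
        theta = theta_new
        break
    theta = theta_new

# The fitted coefficients define an approximate value function everywhere,
# including at states never visited during the iteration.
print("approximate V(3.0):", (basis(3.0, K) @ theta).item())

The design choice that distinguishes SVFI from tabular value function iteration is the projection step: the value function is carried between iterations as a small vector of sieve coefficients rather than as a table of values, so the per-iteration cost scales with the size of the sieve and the sampled subset of states, not with the size of the full state space.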

Agency
National Science Foundation (NSF)
Institute
Division of Social and Economic Sciences (SES)
Type
Standard Grant (Standard)
Application #
1124193
Program Officer
Nancy A. Lutz
Budget Start
2011-09-15
Budget End
2014-08-31
Fiscal Year
2011
Total Cost
$391,114
Name
Duke University
City
Durham
State
NC
Country
United States
Zip Code
27705