Abstract

Over the past decade, a substantial literature has developed on methods for estimating discrete choice dynamic programming (DDP) models of behavior. However, implementing these methods can impose major computational burdens because solving for agents' decision rules often involves high-dimensional integrations that must be performed at each point in the state space. In this paper we develop an approximate solution method that consists of: (1) using Monte Carlo integration to simulate the required multiple integrals at a subset of the state points, and (2) interpolating the non-simulated values using a regression function. The overall performance of this approximation method appears to be excellent. Copyright 1994 by MIT Press.
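The two-step idea in the abstract can be illustrated with a minimal sketch. The payoff functions, the one-dimensional state, and the quadratic regression below are all hypothetical stand-ins, not the paper's actual model: the point is only to show the structure of (1) simulating an expected-maximum ("Emax") integral by Monte Carlo at a subset of state points, and (2) fitting a regression on those simulated values to interpolate the rest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a scalar "state" s and two alternatives whose payoffs
# depend on s plus i.i.d. standard normal shocks.  The integral to
# approximate is Emax(s) = E[max(u1(s) + e1, u2(s) + e2)].
def emax_monte_carlo(s, n_draws=2000):
    """Monte Carlo estimate of the expected maximum at state s."""
    e = rng.standard_normal((n_draws, 2))
    u1 = 1.0 + 0.5 * s + e[:, 0]   # illustrative payoff, alternative 1
    u2 = 0.2 + 0.8 * s + e[:, 1]   # illustrative payoff, alternative 2
    return np.mean(np.maximum(u1, u2))

# Full state space and a small randomly chosen subset of it.
states = np.linspace(0.0, 5.0, 200)
subset = rng.choice(states, size=30, replace=False)

# Step (1): simulate the integral only at the subset of state points.
emax_subset = np.array([emax_monte_carlo(s) for s in subset])

# Step (2): interpolate the non-simulated values with a regression
# function -- here a simple quadratic in the state variable.
X = np.column_stack([np.ones_like(subset), subset, subset**2])
beta, *_ = np.linalg.lstsq(X, emax_subset, rcond=None)

X_all = np.column_stack([np.ones_like(states), states, states**2])
emax_approx = X_all @ beta   # approximated Emax at every state point
```

In a full DDP solution this approximation would be applied at every point of a high-dimensional state space and at each stage of the backward recursion, which is where the computational savings from simulating only a subset of points become substantial.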
