Abstract

The solution to dynamic portfolio choice models can be formulated in terms of a value function by the Bellman principle of optimality, which reduces the multi-period optimal policy choice problem to a sequence of one-period maximization problems. For two adjacent periods, economists compute the error of numerically obtained policies by measuring how much these policies violate the intertemporal first-order conditions of the optimal policy choice problem---so-called Euler equation errors. In this paper, we derive generalized Euler equation errors for the solution to a broad class of discrete-time dynamic portfolio choice models in which the policies are continuous choice variables. Our key precondition is that the gradient of the value function with respect to the state variables can be approximated. This is the case, for example, when a global polynomial basis or B-spline basis functions are used to approximate the value function within the discrete-time dynamic programming approach. We apply our theoretical results to representative dynamic portfolio choice models.
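To make the idea concrete, the following is a minimal sketch (not taken from the paper) of how a unit-free Euler equation error can be computed once the gradient of an approximated value function is available. All model ingredients here---log-utility-style continuation value, CRRA marginal utility, the specific parameter values, and the Chebyshev polynomial basis---are illustrative assumptions for a one-asset consumption-savings problem, standing in for the broader model class the paper treats.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Hypothetical one-period consumption-savings setup (illustrative only):
beta, R, gamma = 0.96, 1.04, 2.0            # discount factor, gross return, CRRA coefficient
u_prime = lambda c: c ** (-gamma)           # marginal utility u'(c)
u_prime_inv = lambda m: m ** (-1.0 / gamma) # inverse marginal utility

# Fit a global polynomial basis (Chebyshev) to a sampled next-period value
# function, as one would to a numerically obtained V; here log wealth serves
# as a smooth stand-in for the computed values.
w_grid = np.linspace(0.5, 10.0, 50)
V_samples = np.log(w_grid)                  # stand-in for computed V(w')
V_hat = Chebyshev.fit(w_grid, V_samples, deg=20)
dV_hat = V_hat.deriv()                      # gradient of the value function approximation

def euler_error(w, c):
    """Unit-free Euler equation error for consuming c out of wealth w.

    The intertemporal first-order condition u'(c) = beta * R * V'(R*(w - c))
    is inverted through u' to express the error in consumption units.
    """
    w_next = R * (w - c)
    implied_c = u_prime_inv(beta * R * dV_hat(w_next))
    return abs(implied_c / c - 1.0)

# Grading a candidate policy: errors near zero indicate (near-)optimality.
err = euler_error(5.0, 2.5)
```

Because the error is expressed as a relative deviation in consumption, it is comparable across states and model parameterizations, which is what makes Euler equation errors a standard accuracy diagnostic for numerical policies.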

