Abstract

This paper contributes to the theoretical and numerical analysis of discrete-time dynamic principal-agent problems with continuous choice sets. We first provide a new and simplified proof for the recursive reformulation of the sequential dynamic principal-agent relationship. Next, we prove the existence of a unique solution for the principal's value function, which solves the dynamic programming problem in the recursive formulation. By showing that the Bellman operator is a contraction mapping, we also obtain a convergence result for value function iteration. To compute a solution to the problem, we must solve a collection of static principal-agent problems at each iteration. Under the assumption that the agent's expected utility is a rational function of his action, we can transform this bi-level optimization problem into a standard nonlinear program. Our solution method yields numerical approximations of the policy and value functions for the dynamic principal-agent model. We illustrate the method by solving variations of two prominent social planning models from the economics literature.
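
To make the iteration concrete, the following Python sketch shows value function iteration in which each Bellman update solves a static principal-agent problem. It is an illustration under strong simplifying assumptions (a two-outcome technology, square-root utility, quadratic effort cost, a promised-utility state variable, and the first-order approach to incentive compatibility), not the authors' implementation; every functional form and parameter value below is hypothetical.

```python
# Minimal sketch: value function iteration for a recursive
# principal-agent model.  All functional forms and parameters are
# illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

beta = 0.9                # common discount factor (assumption)
y_hi, y_lo = 2.0, 0.0     # output levels (assumption)
w_grid = np.linspace(0.0, 2.0, 41)  # grid of promised agent utilities

def u(c):                 # agent's utility from consumption
    return np.sqrt(c)

def psi(a):               # agent's cost of effort a in (0, 1)
    return a ** 2

def bellman_update(V, w):
    """Solve the static principal-agent problem at promised utility w,
    given the current value-function guess V on w_grid."""
    def principal(x):
        a, c_hi, c_lo, wn_hi, wn_lo = x
        cont = np.interp([wn_hi, wn_lo], w_grid, V)
        # principal maximizes expected output net of pay plus
        # discounted continuation value (minimize the negative)
        return -(a * (y_hi - c_hi + beta * cont[0])
                 + (1 - a) * (y_lo - c_lo + beta * cont[1]))

    def promise_keeping(x):   # agent receives exactly w in expectation
        a, c_hi, c_lo, wn_hi, wn_lo = x
        return (a * (u(c_hi) + beta * wn_hi)
                + (1 - a) * (u(c_lo) + beta * wn_lo) - psi(a) - w)

    def incentive_foc(x):     # first-order approach: agent's FOC in a
        a, c_hi, c_lo, wn_hi, wn_lo = x
        return (u(c_hi) + beta * wn_hi) - (u(c_lo) + beta * wn_lo) - 2 * a

    x0 = [0.5, 1.0, 0.5, w, w]
    bounds = [(0.01, 0.99), (1e-6, 5.0), (1e-6, 5.0),
              (w_grid[0], w_grid[-1]), (w_grid[0], w_grid[-1])]
    cons = [{"type": "eq", "fun": promise_keeping},
            {"type": "eq", "fun": incentive_foc}]
    res = minimize(principal, x0, bounds=bounds, constraints=cons,
                   method="SLSQP")
    return -res.fun

V = np.zeros_like(w_grid)
for it in range(200):                      # value function iteration
    V_new = np.array([bellman_update(V, w) for w in w_grid])
    if np.max(np.abs(V_new - V)) < 1e-6:   # sup-norm stopping rule
        break
    V = V_new
```

Because the Bellman operator is a contraction, the sup-norm stopping rule in the loop triggers after finitely many iterations, up to the approximation error introduced by the grid and the local nonlinear solver.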
