Abstract

In theory, a Markov perfect equilibrium of an infinite-horizon nonstationary dynamic game requires the players to forecast an infinite stream of future data. In this paper, we prove that in nonstationary dynamic games with discounting and uniformly bounded rewards, early strategic decisions are effectively decoupled from the tail game. This decoupling is formalized by the notion of a forecast horizon: the first-period equilibrium strategies are invariant to changes in the game parameters in periods beyond the forecast horizon. We illustrate our results in the context of dynamic games of exploitation of a common pool resource, making use of the natural monotonicity properties of finite-horizon equilibria.
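
A standard bound sketches the intuition behind this decoupling under discounting and uniformly bounded rewards; the discount factor $\beta \in (0,1)$, reward bound $M$, and cutoff period $T$ below are illustrative symbols rather than the paper's notation, and the forecast-horizon result itself concerns equilibrium strategies, not merely payoffs.

$$
\left|\sum_{t=T+1}^{\infty} \beta^{\,t-1} r_t\right| \;\le\; M \sum_{t=T+1}^{\infty} \beta^{\,t-1} \;=\; \frac{M\,\beta^{T}}{1-\beta} \;\xrightarrow[T\to\infty]{}\; 0,
$$

so altering the game parameters only in periods after $T$ can shift any player's total discounted payoff by at most $2M\beta^{T}/(1-\beta)$, a quantity that vanishes as $T$ grows.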
