We define a class of dynamic Markovian games, directional dynamic games (DDGs), where directionality is represented by a strategy-independent partial order on the state space. We show that many games are DDGs, yet none of the existing algorithms are guaranteed to find any Markov perfect equilibrium (MPE) of these games, much less all of them. We propose a fast and robust generalization of backward induction that we call state recursion, which operates on a decomposition of the overall DDG into a finite number of more tractable stage games that can be solved recursively. We provide conditions under which state recursion finds at least one MPE of the overall DDG, and introduce a recursive lexicographic search (RLS) algorithm that systematically and efficiently uses state recursion to find all MPE of the overall game in a finite number of steps. We apply RLS to find all MPE of a dynamic model of Bertrand price competition with cost-reducing investments, which we show is a DDG. We provide an exact non-iterative algorithm that finds all MPE of every stage game, and prove there can be only 1, 3, or 5 of them. Using the stage games as building blocks, RLS rapidly finds and enumerates all MPE of the overall game. RLS finds a unique MPE for an alternating-move version of the leapfrogging game when technology improves with probability 1, but in other cases, and in any simultaneous-move version of the game, it finds a huge multiplicity of MPE that explodes exponentially as the number of possible cost states increases.
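The sketch below is an illustrative, hypothetical rendering of the state-recursion idea only, not the paper's implementation: the state space is assumed to be decomposed into stages ordered consistently with the strategy-independent partial order, and each stage game is solved taking equilibrium values of later stages as given. The `stages`, `solve_stage`, and toy payoffs are invented for illustration.

```python
def state_recursion(stages, solve_stage):
    """Generalized backward induction over a directional decomposition.

    stages:      list of stage descriptions, ordered so that play can only
                 move from earlier stages to later ones (earliest first).
    solve_stage: callable (stage, continuation) -> dict mapping each state
                 in the stage to an equilibrium value, given the values of
                 all states in later stages.
    """
    continuation = {}
    # Solve the last stage first, then work back toward the initial stage.
    for stage in reversed(stages):
        continuation.update(solve_stage(stage, continuation))
    return continuation


if __name__ == "__main__":
    # Degenerate single-agent toy: each stage's "equilibrium" value is the
    # stage payoff plus the best continuation value reached from it.
    def solve_stage(stage, continuation):
        best_cont = max(continuation.values(), default=0.0)
        return {s: payoff + best_cont for s, payoff in stage["states"].items()}

    stages = [
        {"states": {"early_A": 1.0, "early_B": 2.0}},  # earlier stage
        {"states": {"terminal": 5.0}},                 # absorbing stage
    ]
    print(state_recursion(stages, solve_stage))
    # {'terminal': 5.0, 'early_A': 6.0, 'early_B': 7.0}
```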