Abstract

This paper develops a general-purpose numerical method for computing feedback Nash equilibria in dynamic games. Players' feedback strategies are first approximated by neural networks, which are then trained online by parallel genetic algorithms that search over all time-invariant equilibrium strategies synchronously. To eliminate the dependence of training on the initial conditions of the game, the players use the same stationary feedback policies (the same networks) to repeatedly play the game from a number of initial states in each generation. The fitness of a given feedback strategy is then computed as the sum of payoffs over all initial states. The evolutionary equilibrium of the game between the genetic algorithms is the feedback Nash equilibrium of the dynamic game. As a numerical example, the method is applied to an oligopoly model with investment.
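The procedure the abstract describes can be sketched in miniature: each player's stationary feedback policy is a small neural network mapping the state to an action, one genetic algorithm per player evolves that network's weights against the opponent's current best network, and a candidate's fitness is its discounted payoff summed over several initial states. Everything below is an illustrative toy, not the paper's model: the two-player stage payoffs, the state transition, the network size, and all names (`policy`, `evolve`, `INIT_STATES`, etc.) are assumptions made for the sketch.

```python
import math
import random

H = 4  # hidden units in each player's policy network (assumed size)

def init_weights():
    """Random weights for a 1-hidden-layer net x -> u: input weights,
    hidden biases, output weights, output bias (3*H + 1 numbers)."""
    return [random.gauss(0.0, 0.5) for _ in range(3 * H + 1)]

def policy(w, x):
    """Time-invariant feedback strategy: action u = NN(x)."""
    hidden = [math.tanh(w[k] * x + w[H + k]) for k in range(H)]
    u = sum(w[2 * H + k] * hidden[k] for k in range(H)) + w[3 * H]
    return max(0.0, u)  # keep investment non-negative

def payoffs(w1, w2, x0, T=20, beta=0.95):
    """Discounted payoffs when both players play their nets from state x0.
    Stage payoffs and transition are a toy stand-in for the oligopoly model."""
    x, total1, total2 = x0, 0.0, 0.0
    for t in range(T):
        u1, u2 = policy(w1, x), policy(w2, x)
        total1 += beta ** t * (x * u1 - u1 ** 2 - 0.5 * u1 * u2)
        total2 += beta ** t * (x * u2 - u2 ** 2 - 0.5 * u1 * u2)
        x = 0.9 * x + 0.1 * (u1 + u2)  # assumed state transition
    return total1, total2

# Several initial states, so fitness does not depend on any single x0
INIT_STATES = [0.5, 1.0, 1.5, 2.0]

def fitness(w1, w2, player):
    """Sum a player's discounted payoff over all initial states."""
    return sum(payoffs(w1, w2, x0)[player - 1] for x0 in INIT_STATES)

def mutate(w, sigma=0.1):
    return [wi + random.gauss(0.0, sigma) for wi in w]

def evolve(generations=30, pop_size=20):
    """Two GAs evolve synchronously, one population of weight vectors
    per player; each is scored against the opponent's current best net."""
    pops = [[init_weights() for _ in range(pop_size)] for _ in range(2)]
    best = [pops[0][0], pops[1][0]]
    for _ in range(generations):
        for i in (0, 1):
            opp = best[1 - i]
            scored = sorted(
                pops[i],
                key=lambda w: fitness(w, opp, 1) if i == 0
                              else fitness(opp, w, 2),
                reverse=True)
            best[i] = scored[0]
            elite = scored[:max(1, pop_size // 4)]
            pops[i] = elite + [mutate(random.choice(elite))
                               for _ in range(pop_size - len(elite))]
    return best  # approximate feedback Nash strategies

if __name__ == "__main__":
    random.seed(1)
    b1, b2 = evolve(generations=8, pop_size=10)
    print(fitness(b1, b2, 1), fitness(b1, b2, 2))
```

Because both populations play the same stationary networks from every initial state, the evolved pair approximates a strategy profile that is a mutual best response across the whole set of starting conditions, which is the sense in which the evolutionary equilibrium matches the feedback Nash equilibrium.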
