Abstract

We study the behavior of a stochastic variant of replicator dynamics in two-agent zero-sum games. Inspired by the well-studied deterministic replicator equation, this model arguably gives a more realistic description of real-world settings and, as we demonstrate here, exhibits behavior entirely distinct from that of its deterministic counterpart, which is known to be recurrent. More precisely, we characterize the statistics of such systems through their invariant measures, which can be shown to be supported entirely on the boundary of the space of mixed strategies. Depending on the noise strength, we can furthermore characterize these invariant measures by identifying where mass accumulates on specific parts of the boundary. In particular, regardless of the magnitude of the noise, we show that any invariant probability measure is a convex combination of Dirac measures on pure strategy profiles, which correspond to the vertices/corners of the agents' simplices. Thus, in the presence of stochastic perturbations, even in the most classic zero-sum settings, such as Matching Pennies, we observe a stark disagreement between the axiomatic prediction of Nash equilibrium and the evolutionarily emergent behavior derived under an assumption of stochastically adaptive, learning agents.
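To make the qualitative claim concrete, the following is a minimal numerical sketch of how one might observe this boundary attraction in Matching Pennies. It assumes a Fudenberg–Harris-style multiplicative noise term and a simple Euler–Maruyama discretization; the noise strength, step size, and thresholds are illustrative choices, not the construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.2      # noise strength (illustrative value)
dt = 1e-3        # Euler-Maruyama step size
steps = 200_000

x, y = 0.5, 0.5  # start at the interior Nash equilibrium of Matching Pennies

traj = np.empty((steps, 2))
for t in range(steps):
    # payoff advantage of "Heads" over "Tails" for each player
    adv_x = 2.0 * (2.0 * y - 1.0)   # player 1 wants to match
    adv_y = 2.0 * (1.0 - 2.0 * x)   # player 2 wants to mismatch
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
    # stochastic replicator step: drift plus multiplicative noise,
    # both vanishing on the boundary (x in {0,1}, y in {0,1})
    x += x * (1.0 - x) * (adv_x * dt + sigma * dW1)
    y += y * (1.0 - y) * (adv_y * dt + sigma * dW2)
    x, y = np.clip(x, 0.0, 1.0), np.clip(y, 0.0, 1.0)
    traj[t] = (x, y)

# fraction of time spent near the four corners (pure strategy profiles)
near_corner = ((traj < 0.01) | (traj > 0.99)).all(axis=1).mean()
print(f"time near a corner: {near_corner:.2%}")
```

In contrast, the deterministic dynamics (sigma = 0) cycle around the interior equilibrium (0.5, 0.5); with noise, the simulated trajectory instead spends most of its time near the corners, consistent with invariant measures concentrating on pure strategy profiles.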
