Abstract

Stochastic stability is an important solution concept for stochastic learning dynamics in games. A limitation of this solution concept, however, is its inability to distinguish between different learning rules that lead to the same steady-state behavior. We identify this limitation and develop a framework for the comparative analysis of the transient behavior of stochastic learning dynamics. We present the framework in the context of two learning dynamics: log-linear learning (LLL) and Metropolis learning (ML). Although both dynamics lead to the same steady-state behavior, they correspond to different behavioral models of decision making. We propose multiple criteria to analyze and quantify the differences in the short- and medium-run behavior of stochastic dynamics. We derive upper bounds on the expected hitting time of the set of Nash equilibria for both LLL and ML. For the medium- to long-run behavior, we identify a set of tools from the theory of perturbed Markov chains that yield a hierarchical decomposition of the state space into collections of states called cycles. We compare LLL and ML under the proposed criteria and derive insights into the behavior of the two dynamics.
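
As a minimal illustration (not taken from the paper), the sketch below contrasts the two revision rules for a single revising player with a finite action set; the names utilities and beta, and the payoff values in the usage lines, are hypothetical. Under LLL the player samples an action with probability proportional to exp(beta * utility), while under ML the player proposes a uniformly random alternative action and accepts it with probability min(1, exp(beta * payoff gain)). With a symmetric proposal rule, both chains share the Gibbs distribution proportional to exp(beta * utility) as their stationary distribution, which is why the two rules are indistinguishable in steady state yet can differ markedly in transient behavior.

```python
import numpy as np

def log_linear_step(utilities, beta, rng):
    # LLL revision: sample an action with probability proportional to
    # exp(beta * utility). beta = 0 gives uniform noise; beta -> infinity
    # concentrates on best responses.
    logits = beta * np.asarray(utilities, dtype=float)
    logits -= logits.max()          # shift for numerical stability only
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(utilities), p=probs)

def metropolis_step(utilities, current, beta, rng):
    # ML revision: propose a uniformly random alternative action and
    # accept it with probability min(1, exp(beta * payoff gain));
    # otherwise keep the current action.
    candidates = [a for a in range(len(utilities)) if a != current]
    proposal = rng.choice(candidates)
    gain = utilities[proposal] - utilities[current]
    if gain >= 0 or rng.random() < np.exp(beta * gain):
        return proposal
    return current

# Hypothetical usage: payoffs of one revising player's three actions.
rng = np.random.default_rng(0)
u = [1.0, 0.0, 2.0]
a_lll = log_linear_step(u, beta=2.0, rng=rng)
a_ml = metropolis_step(u, current=0, beta=2.0, rng=rng)
```

Note that the max-shift in the LLL step only guards against overflow for large beta and leaves the sampling distribution unchanged; the behavioral difference between the rules lies in LLL evaluating all actions at once versus ML comparing a single random proposal against the status quo.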
