Abstract
This paper presents a relationship between evolutionary game dynamics and distributed recency-weighted Monte Carlo learning. After reviewing existing theories of replicator dynamics and agent-based Monte Carlo learning, we prove a formulation-level equivalence between the two models. We demonstrate this relationship not only theoretically but also through computational simulations of the models. As a consequence, macro-level dynamic patterns generated by distributed micro-decisions can be explained by parameters defined at the individual level. In particular, given the equivalent formulations, we investigate how the rate of agents' recency weighting in learning affects the emergent evolutionary game dynamics. An increase in this rate reduces inertia, making the evolutionary stability condition stricter and increasing the speed of convergence toward equilibrium.
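The abstract's final claim can be illustrated with a minimal sketch. Under the formulation-level equivalence, the recency-weighting rate plays the role of the step size of a discrete replicator update, so a larger rate should reach the interior equilibrium in fewer steps. The Hawk-Dove payoff matrix, the function names, and the convergence criterion below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical symmetric Hawk-Dove game (V=2, C=4); interior
# equilibrium at x* = V/C = 0.5. Rows: Hawk, Dove; columns: opponent
# plays Hawk, Dove. This matrix is an assumption for illustration.
PAYOFF = [[-1.0, 2.0],
          [ 0.0, 1.0]]

def replicator_step(x, alpha):
    """One discrete replicator update for the share x of Hawks.

    The recency-weighting rate alpha acts as the effective step size,
    which is where the equivalence to recency-weighted learning enters.
    """
    f_h = x * PAYOFF[0][0] + (1 - x) * PAYOFF[0][1]  # Hawk fitness
    f_d = x * PAYOFF[1][0] + (1 - x) * PAYOFF[1][1]  # Dove fitness
    f_bar = x * f_h + (1 - x) * f_d                  # mean fitness
    return x + alpha * x * (f_h - f_bar)

def steps_to_converge(alpha, x0=0.1, target=0.5, tol=1e-3, max_steps=10_000):
    """Count update steps until the Hawk share is within tol of target."""
    x, t = x0, 0
    while abs(x - target) > tol and t < max_steps:
        x = replicator_step(x, alpha)
        t += 1
    return t
```

With this toy setup, `steps_to_converge(0.5)` is smaller than `steps_to_converge(0.1)`, consistent with the abstract's statement that a higher recency-weighting rate lowers inertia and speeds convergence toward equilibrium.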