Abstract

In stochastic dynamic games, when the number of players is sufficiently large and the interactions between agents depend on the empirical state distribution, one way to approximate the original game is to pass to the infinite-population limit of the problem. In this limit, a generic agent is faced with a so-called mean-field game. In this paper, we study discrete-time mean-field games with average-cost criteria. Using the average cost optimality equation and Kakutani's fixed point theorem, we establish the existence of Nash equilibria for mean-field games under drift and minorization conditions on the dynamics of each agent. We then show that the equilibrium policy of the mean-field game, when adopted by each agent, constitutes an approximate Nash equilibrium for the corresponding finite-agent game with sufficiently many agents.
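
For concreteness, the approximation claim can be read against the standard notion of an $\varepsilon$-Nash equilibrium; the notation below (cost $J^N_i$ of agent $i$ in the $N$-agent game, policy profile $\pi^{*(N)}$) is generic illustration rather than the paper's own. A policy profile $\pi^{*(N)} = (\pi^1,\dots,\pi^N)$ is an $\varepsilon$-Nash equilibrium if, for every agent $i$,
\[
J^N_i\bigl(\pi^{*(N)}\bigr) \;\le\; \inf_{\pi^i} J^N_i\bigl(\pi^i,\pi^{*(N)}_{-i}\bigr) + \varepsilon,
\]
where $\pi^{*(N)}_{-i}$ denotes the policies of all agents other than $i$. The paper's result can then be read as: for any $\varepsilon > 0$, the mean-field equilibrium policy, used by every agent, is an $\varepsilon$-Nash equilibrium once the number of agents $N$ is sufficiently large.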

Highlights

  • In this paper, we consider discrete-time mean-field games subject to average-cost criteria with Polish state and action spaces. These games arise as the infinite-population limit of finite-agent dynamic games, where agents interact through the empirical distribution of their states.

  • In the infinite-population limit, since the empirical state distribution converges to a deterministic probability measure by the law of large numbers, agents are decoupled from one another, and each agent faces a stochastic control problem with a constraint on the distribution of its state.

  • To establish the existence of a mean-field equilibrium, we use the dynamic programming principle for the average-cost criterion, stated via the average cost optimality equation (ACOE), in addition to the fixed-point approach that is commonly used in classical game problems; a schematic form of the ACOE is sketched below.
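
As a point of reference, the ACOE for a fixed mean-field measure typically takes the following schematic form; the symbols below ($\rho_\mu$ for the optimal average cost, $h_\mu$ for the relative value function, $c$ for the one-stage cost, $p$ for the transition kernel, and $\mu$ for the frozen mean-field measure) are generic notation, not necessarily the paper's:
\[
\rho_\mu + h_\mu(x) \;=\; \min_{a \in \mathsf{A}} \Bigl[\, c(x,a,\mu) + \int_{\mathsf{X}} h_\mu(y)\, p(dy \mid x,a,\mu) \,\Bigr], \qquad x \in \mathsf{X}.
\]
Drift and minorization conditions of the kind mentioned in the abstract are the usual route to guaranteeing that a solution pair $(\rho_\mu, h_\mu)$ exists for each fixed $\mu$, and a measurable selector of the minimum then gives a policy that is average-cost optimal against $\mu$.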


Summary

Introduction

We consider discrete-time mean-field games subject to average-cost criteria with Polish state and action spaces. These games arise as the infinite-population limit of finite-agent dynamic games, where agents interact through the empirical distribution of their states. Under strong regularity conditions on the system components, Biswas [5] established the existence of Nash equilibria for finite-agent games and showed that these Nash equilibria converge to mean-field equilibria in the infinite-population limit. These regularity conditions are in general prohibitive, because they are stated in terms of a specific metric topology on the set of policies and appear to be too strong to hold under reasonable assumptions. To establish the existence of a mean-field equilibrium, we use the dynamic programming principle for the average-cost criterion, stated via the average cost optimality equation (ACOE), in addition to the fixed-point approach that is commonly used in classical game problems.
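
To make the fixed-point structure explicit, a mean-field equilibrium is commonly defined as a pair $(\pi,\mu)$ such that (i) $\pi$ is optimal for the average-cost control problem in which the mean-field term is frozen at $\mu$, and (ii) $\mu$ is the invariant distribution of the state process induced by $\pi$ under the frozen measure $\mu$. This formulation and the set-valued map $\Lambda$ below are the standard way of writing the condition, not a quotation from the paper:
\[
\mu^* \in \Lambda(\mu^*), \qquad \Lambda(\mu) := \bigl\{\, \text{invariant state distributions under policies that are average-cost optimal against } \mu \,\bigr\}.
\]
Establishing a fixed point of $\Lambda$ is where Kakutani's fixed point theorem is applied, while the ACOE supplies the optimal policies that define $\Lambda(\mu)$.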
