Abstract

In this paper, we study a class of games regularized by relative entropy in which the players’ strategies are coupled through a random environment. Besides the existence and uniqueness of equilibria for such games, we prove, under different sets of hypotheses, that the marginal laws of the corresponding mean-field Langevin systems can converge toward the games’ equilibria. As an application, we show that dynamic games fall within this framework by treating the time horizon as the environment. Our results further allow the analysis of stochastic gradient descent algorithms for deep neural networks in the context of supervised learning, and for generative adversarial networks.
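
For orientation, the following is a minimal sketch in our own notation, not a formula taken from the paper. In the single-player case, the mean-field Langevin dynamics associated with an entropy-regularized objective \(F\) typically reads
\[
dX_t \;=\; -\,D_m F\bigl(\mathrm{Law}(X_t),\, X_t\bigr)\, dt \;+\; \sigma\, dW_t ,
\]
where \(D_m F\) denotes a derivative of \(F\) with respect to the measure argument, and, under suitable convexity and regularity conditions, the invariant measure of this dynamics minimizes the regularized objective \(m \mapsto F(m) + \tfrac{\sigma^2}{2}\, H(m)\), with \(H\) the relative entropy with respect to the Lebesgue measure. The games studied in the paper couple several dynamics of this type through a random environment.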
