Abstract

We consider a class of mean field games in which the agents interact through both their states and controls, and we focus on situations in which a generic agent tries to adjust her speed (control) to an average speed (the average is taken over a neighborhood in the state space). In such cases, the monotonicity assumptions that are frequently made in the theory of mean field games do not hold, and uniqueness cannot be expected in general. Such models lead to systems of forward-backward nonlinear nonlocal parabolic equations; the latter are supplemented with various kinds of boundary conditions, in particular Neumann-like boundary conditions stemming from reflection conditions on the underlying controlled stochastic processes. The present work deals with numerical approximations of the above-mentioned systems. After describing the finite difference scheme, we propose an iterative method for solving the systems of nonlinear equations that arise in the discrete setting; it combines a continuation method, Newton iterations, and inner loops of a biconjugate-gradient-like solver. The numerical method is used for simulating two examples. We also report experiments on the behaviour of the iterative algorithm when the parameters of the model vary.
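For orientation, the sketch below shows the generic structure of such a solver: an outer continuation loop on a model parameter, Newton iterations on the discrete nonlinear system at each continuation step, and an inner Krylov (BiCGStab-like) solve of the linearized systems. The residual F and the Jacobian-vector product J are hypothetical placeholders for the discretized forward-backward system; this is only a minimal sketch, not the implementation used in the paper.

```python
import numpy as np
from scipy.sparse.linalg import bicgstab, LinearOperator

def solve_by_continuation(F, J, x0, thetas, newton_tol=1e-8, max_newton=50):
    """Continuation in theta; Newton at each step; BiCGStab for the linear solves.

    F(x, theta)    -> residual vector of the discrete nonlinear system (hypothetical)
    J(x, theta, v) -> Jacobian-vector product at (x, theta) (hypothetical)
    """
    x = np.asarray(x0, dtype=float).copy()
    for theta in thetas:                               # continuation loop
        for _ in range(max_newton):                    # Newton iterations at fixed theta
            r = F(x, theta)
            if np.linalg.norm(r) < newton_tol:
                break
            # Linearized system solved by an inner Krylov (BiCGStab) iteration
            A = LinearOperator((x.size, x.size),
                               matvec=lambda v, x=x, th=theta: J(x, th, v))
            dx, info = bicgstab(A, -r)
            if info != 0:
                raise RuntimeError("inner linear solver did not converge")
            x = x + dx
    return x
```

In practice the continuation parameter could be, for instance, a viscosity or the coefficient weighting the interaction through the controls, and the list `thetas` drives the problem from an easier regime towards the target one.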

Highlights

  • The theory of mean field games (MFGs for short) aims at studying deterministic or stochastic differential games (Nash equilibria) as the number of agents tends to infinity

  • It supposes that the rational agents are indistinguishable and individually have a negligible influence on the game, and that each individual strategy is influenced by some averages of quantities depending on the states of the other agents

  • At approximately the same time, the notion of mean field games arose in the engineering literature; see the works of M.Y

Summary

- Introduction

The theory of mean field games (MFGs for short) aims at studying deterministic or stochastic differential games (Nash equilibria) as the number of agents tends to infinity. If the value function is uniformly bounded from below (which is often the case even if there are interactions through the controls), this results in a relationship between the $L^\infty([0,T];L^1(\Omega))$-norm of the positive part of $u$ and the $L^{2m}([0,T]\times\Omega;\mathbb{R}^d)$-norm of $\nabla_x u$; this observation can be used to obtain additional a priori estimates, which may be combined with the ones obtained from the maximum principle and discussed above, and with the Bernstein method. This strategy has been implemented in [16] and leads to the existence of a solution to (1.1) under suitable assumptions. Note that in [11], existence and uniqueness have been proved with probabilistic arguments in the case where the Hamiltonian depends separately on $p$ and $\mu$.
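For orientation, a schematic version of a second-order MFG system of the type referred to as (1.1) is sketched below, in the standard form of a backward Hamilton-Jacobi-Bellman equation coupled with a forward Fokker-Planck-Kolmogorov equation. This is only indicative: in the mean field games of controls considered here, the Hamiltonian depends on the joint distribution $\mu$ of states and controls, and the precise structure and the Neumann-like boundary conditions are those of the paper.

```latex
% Schematic second-order MFG system (indicative only; the exact coupling
% through the joint law \mu of states and controls is as in the paper)
\[
\begin{aligned}
  -\partial_t u - \nu \Delta u + H(x, \nabla u, \mu) &= 0
      && \text{in } (0,T)\times\Omega \quad \text{(backward HJB)},\\
  \partial_t m - \nu \Delta m
      - \operatorname{div}\!\bigl(m\,\partial_p H(x, \nabla u, \mu)\bigr) &= 0
      && \text{in } (0,T)\times\Omega \quad \text{(forward FPK)},\\
  u(T,\cdot) = u_T, \qquad m(0,\cdot) &= m_0,
\end{aligned}
\]
```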

A more detailed description of the considered class of MFGCs
Organization of the paper
Notations and definitions
The scheme
Solving the discrete version of the Fokker-Planck-Kolmogorov equation
The coupling cost and the average drift
Notation
Linearized Hamilton-Jacobi-Bellman equation
Linearized Kolmogorov-Fokker-Planck equation
Linearized coupling costs and average drifts
Description of the model
Behaviour of the algorithm
Queues
- References