Abstract

The goal of this chapter is to propose solutions to asymptotic forms of the search for Nash equilibria in large stochastic differential games with mean field interactions. We implement the Mean Field Game strategy, initially developed by Lasry and Lions in an analytic set-up, in a purely probabilistic framework. The road to a solution goes through a class of standard stochastic control problems followed by fixed point problems for flows of probability measures. We tackle the inherent stochastic optimization problems in two different ways: first by representing the value function as the solution of a backward stochastic differential equation (reminiscent of the so-called weak formulation approach), and second by using the Pontryagin stochastic maximum principle. In both cases, the optimization problem reduces to the solution of a Forward-Backward Stochastic Differential Equation (FBSDE for short). The search for a fixed point over flows of probability measures turns the FBSDE into a system of equations of the McKean-Vlasov type, in which the distribution of the solution appears in the coefficients. In this way, both the optimization and interaction components of the problem are captured by a single FBSDE, avoiding the twofold reference to Hamilton-Jacobi-Bellman equations on the one hand and to Kolmogorov equations on the other.
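For orientation, a McKean-Vlasov FBSDE of the kind described above can be written schematically as follows; the coefficients $b$, $\sigma$, $f$, $g$ and the notation $\mathcal{L}(X_t)$ for the law of $X_t$ are generic placeholders and are not taken from this chapter:
\[
\begin{aligned}
dX_t &= b\bigl(t, X_t, \mathcal{L}(X_t), Y_t\bigr)\,dt + \sigma\bigl(t, X_t, \mathcal{L}(X_t)\bigr)\,dW_t, & X_0 &= x_0,\\
dY_t &= -f\bigl(t, X_t, \mathcal{L}(X_t), Y_t, Z_t\bigr)\,dt + Z_t\,dW_t, & Y_T &= g\bigl(X_T, \mathcal{L}(X_T)\bigr),
\end{aligned}
\]
where the forward component $X$ describes the state of a representative player, the backward pair $(Y, Z)$ encodes either the value function or the adjoint variables, depending on which of the two approaches is used, and the presence of $\mathcal{L}(X_t)$ in the coefficients is what makes the system of McKean-Vlasov type.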
