Abstract

This paper considers first-order continuous-time mean field games in which agents are coupled through their dynamics and their individual cost functions. For such games, a mean field equilibrium is derived using the state space augmentation technique and the Nash certainty equivalence (NCE) principle. The resulting decentralized strategy, which corresponds to the optimal strategy, consists of two constant gains and a mean field state: the gains are obtained by solving two algebraic Riccati equations, and the time-varying mean field is obtained from an ordinary differential equation. Solving these equations, however, requires the agents' models. To eliminate the need for system information, a two-stage model-free method is proposed. First, model-based iterative equations are proposed to approximate the constant feedback gain; using the integral reinforcement learning (IRL) technique, the system dynamics in these iterative equations are replaced by real-time data. Moreover, based on the learned solution, the input coefficient can be calculated; to this end, a model-free computational equation is developed by embedding the same collected data. Finally, the mean field state is computed from the obtained gains and the observed state and input trajectories of the agents.
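
As an illustration of the model-based construction described above, the following is a minimal sketch for a scalar linear-quadratic setting. The matrices A, B, Q, R, Qbar, the closed-loop form of the mean field ODE, and the control structure u_i = -K1 x_i - K2 zbar are hypothetical placeholders chosen for illustration; they are not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Illustrative (hypothetical) problem data; the paper's actual dynamics, cost
# weights, and augmented-state construction are not specified in the abstract.
A = np.array([[0.5]])      # individual agent dynamics
B = np.array([[1.0]])      # input matrix
Q = np.array([[2.0]])      # state weight in the individual cost
R = np.array([[1.0]])      # input weight
Qbar = np.array([[1.0]])   # weight on the augmented (mean-field tracking) state

# Two algebraic Riccati equations give the two constant feedback gains.
P1 = solve_continuous_are(A, B, Q, R)
P2 = solve_continuous_are(A, B, Qbar, R)
K1 = np.linalg.solve(R, B.T @ P1)   # gain on the agent's own state
K2 = np.linalg.solve(R, B.T @ P2)   # gain on the mean-field state

# Mean-field ODE: assuming every agent applies the same decentralized control,
# the population average follows the closed-loop dynamics (illustrative form).
def mean_field_rhs(t, z):
    return (A - B @ K1 - B @ K2) @ z

z0 = np.array([1.0])                 # assumed initial mean field
sol = solve_ivp(mean_field_rhs, (0.0, 10.0), z0, dense_output=True)

# Decentralized control for agent i: u_i(t) = -K1 x_i(t) - K2 zbar(t)
def control(t, x_i):
    zbar = sol.sol(t)
    return -(K1 @ x_i) - (K2 @ zbar)

print("K1 =", K1, "K2 =", K2, "zbar(10) =", sol.sol(10.0))
```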

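For the data-driven step, the sketch below shows on-policy integral reinforcement learning for a single scalar agent: an integral Bellman equation is fitted from measured trajectory data, so knowledge of the drift coefficient is not needed. The constants a_true, b_true, q, r, the reinforcement interval T, and the use of the true input coefficient b in the gain update are illustrative assumptions; the paper additionally computes the input coefficient from the same collected data via a model-free equation.

```python
import numpy as np

# Hypothetical scalar agent; a_true and b_true are unknown to the learner, and
# the simulator below stands in for real-time measurement of x(t).
a_true, b_true = 0.5, 1.0
q, r = 2.0, 1.0
dt, T = 0.001, 0.05            # Euler step and IRL reinforcement interval

def simulate(x0, k, steps):
    """Roll out x' = a x + b u with u = -k x; return states and stage costs."""
    xs, cs = [x0], []
    x = x0
    for _ in range(steps):
        u = -k * x
        cs.append(q * x**2 + r * u**2)
        x = x + dt * (a_true * x + b_true * u)
        xs.append(x)
    return np.array(xs), np.array(cs)

k = 1.0                         # initial stabilizing gain (assumed available)
n_intervals = 20
steps_per_interval = int(T / dt)

for it in range(8):
    lhs, rhs = [], []
    x0 = 1.0
    for _ in range(n_intervals):
        xs, cs = simulate(x0, k, steps_per_interval)
        # Integral Bellman equation under policy u = -k x, with V(x) = p x^2:
        #   p * (x(t)^2 - x(t+T)^2) = integral over [t, t+T] of (q + r k^2) x^2
        lhs.append(xs[0]**2 - xs[-1]**2)
        rhs.append(np.sum(cs) * dt)
        x0 = xs[-1]
    lhs, rhs = np.array(lhs), np.array(rhs)
    p = float(lhs @ rhs) / float(lhs @ lhs)   # least-squares fit of p from data
    # Gain update k <- b p / r; the true b is used here for simplicity, whereas
    # the paper obtains the input coefficient from the same data model-free.
    k = b_true * p / r
    print(f"iteration {it}: p = {p:.4f}, k = {k:.4f}")
```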