Abstract

This paper studies backward-forward linear-quadratic-Gaussian (LQG) games with major and minor agents (players). The state of the major agent follows a linear backward stochastic differential equation (BSDE), while the states of the minor agents are governed by linear forward stochastic differential equations (SDEs). The major agent is dominating in that its state enters the dynamics of the minor agents. On the other hand, each minor agent is individually negligible, but their state-average affects the cost functional of the major agent. The mean-field game in this backward-major and forward-minor setup is formulated to analyze the decentralized strategies. We first derive the consistency condition via auxiliary mean-field SDEs and a 3×2 mixed backward-forward stochastic differential equation (BFSDE) system. Next, we discuss the well-posedness of this BFSDE system by virtue of the monotonicity method. Consequently, we obtain the decentralized strategies for the major and minor agents, which are proved to satisfy the ε-Nash equilibrium property.
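As a rough illustration of the setup just described (a hedged sketch only: the coefficients A_0, B_0, C_0, A, B, D, F and the terminal target ξ below are generic placeholders, not the paper's exact model), the major agent's state may solve a linear BSDE with a prescribed terminal condition, while each minor agent's state solves a linear forward SDE driven by the major state, with the empirical state-average entering the major agent's cost:

\[
\begin{aligned}
  % Major agent: linear BSDE with prescribed terminal target \xi (placeholder coefficients)
  -\,\mathrm{d}x_0(t) &= \bigl(A_0\,x_0(t) + B_0\,u_0(t) + C_0\,z_0(t)\bigr)\,\mathrm{d}t - z_0(t)\,\mathrm{d}W_0(t),
  \qquad x_0(T) = \xi,\\[2pt]
  % Minor agent A_i: linear forward SDE, coupled to the major state x_0
  \mathrm{d}x_i(t) &= \bigl(A\,x_i(t) + B\,u_i(t) + D\,x_0(t)\bigr)\,\mathrm{d}t + F\,\mathrm{d}W_i(t),
  \qquad x_i(0) = x_{i0},\quad i = 1,\dots,N,\\[2pt]
  % Empirical state-average of the minor agents
  x^{(N)}(t) &= \frac{1}{N}\sum_{i=1}^{N} x_i(t).
\end{aligned}
\]

Here z_0 is the BSDE's second unknown and W_0, W_i are the driving Brownian motions; in the backward equation the pair (x_0(0), z_0) is determined by the dynamics and the terminal condition rather than prescribed in advance.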

Highlights

  • The dynamic optimization of large-population systems has attracted extensive research attention from academic communities

  • Its most significant feature is the existence of numerous insignificant agents, denoted by $\{A_i\}_{i=1}^{N}$, whose dynamics and cost functionals are coupled via their state-average

  • Unlike other mean-field game literature: (1) here, the major and minor agents are endowed with different objective patterns: the major agent aims to fulfill some prescribed future target, so it faces a “backward” LQ problem of minimizing the initial endowment (see the sketch after this list)
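To make the third point concrete, one way to phrase such a backward LQ objective (a sketch under assumed placeholder weights Q_0, R_0, not the paper's exact functional) is as follows: since the terminal target x_0(T) = ξ is prescribed, the backward dynamics determine the initial state x_0(0), which acts as the initial endowment to be minimized alongside a running cost penalizing deviation from the minor agents' state-average:

\[
% Hedged sketch of a backward LQ objective for the major agent; Q_0, R_0 are illustrative weights.
J_0(u_0) \;=\; \mathbb{E}\!\left[\,\tfrac{1}{2}\bigl|x_0(0)\bigr|^2
  \;+\; \tfrac{1}{2}\int_0^T \Bigl(\bigl\langle Q_0\bigl(x_0(t)-x^{(N)}(t)\bigr),\,x_0(t)-x^{(N)}(t)\bigr\rangle
  + \bigl\langle R_0\,u_0(t),\,u_0(t)\bigr\rangle\Bigr)\,\mathrm{d}t\,\right],
\qquad \text{subject to } x_0(T)=\xi.
\]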



Introduction

The dynamic optimization of (linear) large-population systems has attracted extensive research attention from academic communities. Its most significant feature is the existence of numerous insignificant agents, denoted by $\{A_i\}_{i=1}^{N}$, whose dynamics and (or) cost functionals are coupled via their state-average. To design low-complexity strategies for large-population systems, one efficient method is the mean-field game (MFG) approach, which enables us to derive decentralized strategies. Interested readers may refer to Lasry and Lions (2007) and Guéant et al. (2010) for the motivation and methodology, and to Andersson and Djehiche (2011), Bardi (2012), Bensoussan et al. (2016), Buckdahn et al. (2009a, 2009b, 2010, 2011), Carmona and Delarue (2013), Huang et al. (2006, 2007, 2012), and Li and Zhang (2008) for recent progress in MFG theory.
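Schematically, the MFG recipe alluded to above replaces the empirical state-average by a frozen (exogenous) mean-field term in each agent's decentralized problem and then closes the loop with a consistency, i.e. fixed-point, condition. In generic form (this is the standard MFG consistency idea, not the paper's specific 3×2 BFSDE consistency system):

\[
% Generic MFG consistency condition: the frozen mean-field term must coincide with
% the limit of the state-average it generates under the decentralized best responses.
\bar{x}(t) \;=\; \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N} x_i^{*}\bigl(t;\,\bar{x}(\cdot)\bigr),
\qquad t\in[0,T],
\]

where x_i^{*}(\cdot;\bar{x}) denotes minor agent A_i's state under its decentralized best response computed against the frozen term \bar{x}(\cdot). Solving this fixed point yields decentralized strategies that are asymptotically optimal, which is the content of the ε-Nash equilibrium property.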


