Abstract

Subject to reasonable conditions, in large-population stochastic dynamic games where the agents are coupled through the system's mean field via their nonlinear dynamics and cost functions, it can be shown that each agent has a best response control action which (i) depends only upon that agent's own state observations and the mean field, and (ii) yields an ε-Nash equilibrium for the system. In this work we formulate a class of problems where each agent has only partial observations of its individual state. The main result is that the ε-Nash equilibrium property still holds, where the best response control action of each agent depends upon the conditional density of its own state generated by a nonlinear filter, together with the system's mean field. Finally, it is worthwhile to compare this MFG state estimation problem with one found in the literature involving a major agent whose partially observed state process is independent of the control action of any individual agent; by contrast, in the present work the partially observed state process of each agent depends upon that agent's own control action.
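As an illustrative sketch only (the paper's precise model classes and technical conditions are not reproduced here), a partially observed mean field game of the kind described above can be written in the following generic form, where the drift f, observation map h, running cost L, noise intensity σ, filter output π_i and feedback map φ are placeholder symbols introduced for exposition, and μ_t denotes the (limiting) mean field distribution of the agents' states:

\begin{align*}
  &\text{Dynamics of agent } i:\quad
     dx_i(t) = f\bigl(x_i(t),\,u_i(t),\,\mu_t\bigr)\,dt + \sigma\,dw_i(t),\\
  &\text{Partial observations:}\quad
     dy_i(t) = h\bigl(x_i(t)\bigr)\,dt + dv_i(t),\\
  &\text{Cost of agent } i:\quad
     J_i^N(u_i,u_{-i}) = \mathbb{E}\int_0^T L\bigl(x_i(t),\,u_i(t),\,\mu_t\bigr)\,dt,\\
  &\text{Nonlinear filter (conditional density):}\quad
     \pi_i(t,\cdot) = \mathcal{L}\bigl(x_i(t)\,\big|\,\mathcal{F}^{y_i}_t\bigr),\\
  &\text{Best response control:}\quad
     u_i^{\circ}(t) = \varphi\bigl(t,\,\pi_i(t,\cdot),\,\mu_t\bigr),\\
  &\varepsilon\text{-Nash property:}\quad
     J_i^N(u_i^{\circ},u_{-i}^{\circ}) \le \inf_{u_i} J_i^N(u_i,u_{-i}^{\circ}) + \varepsilon_N,
     \qquad \varepsilon_N \to 0 \ \text{as}\ N \to \infty.
\end{align*}

Note that, in this sketch, the observation process y_i is driven by the state x_i, which in turn depends on the control u_i; this is precisely the feature contrasted in the abstract with the major-agent estimation problem, where the partially observed state process is independent of any individual agent's control action.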


