Abstract

In this paper, we consider game problems played by (multi-)integrator agents subject to external disturbances. We propose Nash equilibrium seeking dynamics based on gradient play, augmented with a dynamic internal-model-based component, which is a reduced-order observer of the disturbance. We consider single- and double-integrator agents, with extensions to multi-integrator agents, in a partial-information setting where agents have only partial knowledge of the others' decisions over a network. The lack of global information is offset by each agent maintaining an estimate of the others' states, based on local communication with its neighbours. Each agent has an additional dynamic component that drives its estimates to the consensus subspace. In all cases, we show convergence to the Nash equilibrium irrespective of disturbances. Our proofs leverage input-to-state stability under strong monotonicity of the pseudo-gradient and Lipschitz continuity of the extended pseudo-gradient.
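To make the idea concrete, the following is a minimal sketch (not the paper's exact construction) of disturbance-rejecting gradient play for two single-integrator agents in a quadratic game. The game matrix `A`, offset `b`, disturbance values, and observer gain `k` are illustrative assumptions; the observer is a standard reduced-order disturbance observer for constant disturbances, whose estimate `d_hat` satisfies `d_hat_dot = k * (d - d_hat)`.

```python
# Hypothetical illustration: gradient-play Nash seeking for two single-integrator
# agents, x_i_dot = u_i + d_i, with unknown constant disturbances d_i rejected by
# a reduced-order disturbance observer (an internal model for constant signals).

def pseudo_gradient(x):
    """Strongly monotone pseudo-gradient F(x) = A x + b of an assumed quadratic game."""
    A = [[2.0, 0.5], [0.5, 2.0]]  # symmetric part positive definite => strong monotonicity
    b = [-2.0, -1.0]
    return [A[i][0] * x[0] + A[i][1] * x[1] + b[i] for i in range(2)]

def simulate(steps=20000, dt=1e-3, k=5.0):
    x = [0.0, 0.0]    # agents' decisions
    d = [1.0, -0.5]   # unknown constant disturbances (illustrative values)
    xi = [0.0, 0.0]   # observer internal states
    for _ in range(2):
        pass
    for _ in range(steps):
        d_hat = [xi[i] + k * x[i] for i in range(2)]      # disturbance estimate
        F = pseudo_gradient(x)
        u = [-F[i] - d_hat[i] for i in range(2)]          # gradient play + rejection
        # observer dynamics xi_dot = -k (u + d_hat), so that d_hat_dot = k (d - d_hat)
        xi = [xi[i] - dt * k * (u[i] + d_hat[i]) for i in range(2)]
        x = [x[i] + dt * (u[i] + d[i]) for i in range(2)]
    return x, d_hat

x_final, d_hat_final = simulate()
# x_final approaches the Nash equilibrium x* solving A x* + b = 0,
# and d_hat_final approaches the true disturbance d.
```

Once `d_hat` converges, the residual `d - d_hat` vanishes and the closed loop reduces to plain gradient play `x_dot = -F(x)`, which converges under strong monotonicity; the abstract's ISS argument formalises this cascade.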
