Abstract

In this study, the leader's behavior learning problem is investigated for a class of leader-follower systems in which the leader's behavior is not known a priori to the autonomous followers. A multi-player nonzero-sum differential game is introduced to model the control interaction between agents, where the leader's behavior is characterized by an unknown cost function. The autonomous followers aim to collaboratively retrieve the weighting matrix in the leader's cost function. A distributed online adaptive inverse differential game (IDG) approach to the leader's behavior learning is proposed for the autonomous followers. Specifically, a concurrent learning (CL) based adaptive law and an interactive game controller are first developed for each autonomous follower to learn the leader's feedback gain matrix online, while the feedback Nash equilibrium of the game is achieved. Then, a linear matrix inequality (LMI) optimization problem is formulated for each autonomous follower to determine the weighting matrix of the leader's cost function. The proposed method requires only that the followers share their interactive feedback gain matrices, not their private intents, and that they use system state data alone, without access to the leader's control input data. Its main advantages are that it can be implemented online without the persistent excitation condition and demands less computational power from the followers. Finally, numerical simulations demonstrate the effectiveness and feasibility of the developed method.
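The core idea of the second stage, recovering a cost-function weighting matrix from an identified feedback gain, can be illustrated in the single-agent linear-quadratic case. The sketch below is not the paper's distributed IDG algorithm; it is a minimal illustration of the underlying inverse relation, assuming one follower has already learned the leader's gain K, the input matrix B is invertible, and the input weighting R is fixed to the identity to remove scale ambiguity. Given K = R⁻¹BᵀP, the value matrix P follows from BᵀP = RK, and the state weighting Q follows from the algebraic Riccati equation. All numerical values are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state leader dynamics (illustrative values)
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.eye(2)            # invertible input matrix (assumption for this sketch)
Q_true = np.diag([2.0, 1.0])  # the leader's "unknown" state weighting
R = np.eye(2)            # input weighting fixed to identity (scale normalization)

# Forward problem: the leader's optimal feedback gain K = R^{-1} B^T P
P = solve_continuous_are(A, B, Q_true, R)
K = np.linalg.solve(R, B.T @ P)

# Inverse step: a follower that has learned K recovers P from B^T P = R K
# (solvable directly here because B is invertible), then Q from the ARE:
#   Q = K^T R K - A^T P - P A
P_rec = np.linalg.solve(B.T, R @ K)
Q_rec = K.T @ R @ K - A.T @ P_rec - P_rec @ A

print(np.allclose(Q_rec, Q_true))  # True
```

In the paper's setting the same relation is posed as an LMI feasibility problem, since with multiple followers and a non-square B the equations above become constraints rather than a direct solve.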
