Abstract

In this paper, distributed flocking strategies are developed for multi-agent two-player zero-sum games. Two main challenges are addressed: (a) handling system uncertainties and disturbances, and (b) achieving optimality. Building on the emerging Approximate Dynamic Programming (ADP) technique, a novel distributed adaptive flocking design is proposed that optimizes multi-agent two-player zero-sum games even when the system dynamics and disturbances are unknown. First, a novel flocking cost function is developed to evaluate the multi-agent flocking performance and the effects of disturbances. Next, an online neural network (NN) based identifier is proposed to effectively approximate the dynamics of the multi-agent zero-sum game system. Subsequently, another NN is introduced to approximate the optimal flocking cost function by solving the Hamilton-Jacobi-Isaacs (HJI) equation in a forward-in-time manner. Moreover, an additional term is designed and incorporated into the NN update law to relax the stringent requirement of an initial admissible control. Finally, the distributed adaptive optimal flocking design is obtained by combining the learned multi-agent zero-sum game system dynamics with the approximated optimal flocking cost function. Simulation results demonstrate the effectiveness of the proposed scheme.
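For context, the sketch below gives a standard formulation of the value function and HJI equation that underlies two-player zero-sum game designs of this kind; the dynamics f, g, k, the weighting matrices Q and R, and the disturbance attenuation level gamma are generic placeholders for illustration, not the specific flocking cost or system model proposed in the paper.

```latex
% Illustrative two-player zero-sum game formulation (generic, assuming an
% affine-in-input system; not the paper's exact flocking cost function).
% System: \dot{x} = f(x) + g(x)u + k(x)d, where u is the minimizing player
% (control) and d is the maximizing player (disturbance).
\begin{align}
  V(x_0) &= \min_{u}\,\max_{d} \int_{0}^{\infty}
      \big( x^{\top} Q x + u^{\top} R u - \gamma^{2} d^{\top} d \big)\, dt ,\\
  0 &= x^{\top} Q x + u^{*\top} R u^{*} - \gamma^{2} d^{*\top} d^{*}
      + \nabla V^{\top}\big( f(x) + g(x)u^{*} + k(x)d^{*} \big)
      \quad \text{(HJI equation)},\\
  u^{*} &= -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V , \qquad
  d^{*} = \tfrac{1}{2\gamma^{2}}\, k(x)^{\top} \nabla V .
\end{align}
```

In an ADP scheme of the type described in the abstract, an NN critic approximates V online so that the HJI residual is driven toward zero forward in time, rather than solving the HJI equation offline.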
