Abstract

Optimal antisynchronization control for unknown multiagent systems (MASs) with cooperative-competitive (i.e., coopetition) interactions is challenging because of the complex connection topology and the coupling among agents. This paper proposes a reinforcement learning algorithm based on coopetition strength (CS) to achieve optimal antisynchronization in unknown MASs. First, a novel CS function is introduced, through which the agents' local state error information is redefined. A policy iteration method is then proposed to approximate each agent's optimal control policy, and the convergence of the proposed algorithm is analyzed using Lyapunov stability theory and functional analysis. To implement the data-based control policy, an actor-critic (AC) network structure is designed; a target network and experience replay (ER) are introduced during training to improve the robustness of the control policy. Finally, the algorithm's effectiveness is validated through comparative numerical simulations.
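To make the training machinery concrete, the sketch below illustrates the two stabilization devices the abstract names, a target network and experience replay, around a linear critic on a toy 2-D system. This is a hypothetical illustration only: the class names, dimensions, learning rates, and the linear value approximator are assumptions for exposition, not the paper's actual actor-critic design or CS-based error dynamics.

```python
import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Fixed-size experience replay: store transitions, sample random minibatches."""

    def __init__(self, capacity=1000):
        self.buf = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buf.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.buf, batch_size)


class LinearCritic:
    """Linear value approximator V(s) = w^T s with a slowly updated target copy."""

    def __init__(self, dim, lr=0.05, tau=0.1, gamma=0.95):
        self.w = np.zeros(dim)         # online critic weights
        self.w_target = np.zeros(dim)  # target-network weights (lag the online ones)
        self.lr, self.tau, self.gamma = lr, tau, gamma

    def value(self, s, target=False):
        w = self.w_target if target else self.w
        return float(w @ s)

    def update(self, batch):
        # TD(0) update; the bootstrap term uses the frozen target weights,
        # which is what decouples the regression target from the online critic.
        for s, _, r, s_next in batch:
            td_error = r + self.gamma * self.value(s_next, target=True) - self.value(s)
            self.w += self.lr * td_error * s
        # Soft (Polyak) update of the target network toward the online critic.
        self.w_target = self.tau * self.w + (1 - self.tau) * self.w_target


rng = np.random.default_rng(0)
buffer = ReplayBuffer()
critic = LinearCritic(dim=2)

# Collect transitions from a toy stable 2-D system drifting toward the origin;
# the reward penalizes distance from the origin (a stand-in for a tracking error).
s = np.array([1.0, -1.0])
for _ in range(200):
    s_next = 0.9 * s + 0.01 * rng.standard_normal(2)
    buffer.push(s, None, -float(s @ s), s_next)
    s = s_next

# Train the critic on replayed minibatches rather than the raw trajectory order.
for _ in range(50):
    critic.update(buffer.sample(32))
```

Sampling minibatches from the buffer breaks the temporal correlation of consecutive transitions, and the target copy keeps the bootstrap target from chasing the weights it is training, which is why both tricks tend to improve robustness of the learned policy.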
