Abstract

This article addresses the optimized high-order consensus problem for nonlinear canonical dynamic multiagent systems (MASs). Because every agent contains multiple state variables related by differentiation, a sliding-mode control (SMC) mechanism is employed to reach high-order consensus. Reinforcement learning (RL) under an identifier–critic–actor architecture is then performed to optimize this consensus control, so that the RL strategy and the SMC mechanism are combined to achieve optimized high-order consensus control. Compared with traditional optimal control, on the one hand, the actor and critic RL updating laws are significantly simpler and provide sufficient training of the adaptive parameters, so the persistence-of-excitation condition is relaxed; on the other hand, the optimized consensus scheme does not require complete knowledge of the system dynamics, because an adaptive identifier is integrated into the RL design to compensate for the unknown dynamics. As a consequence, the proposed optimized control can smoothly steer the high-order MAS to consensus in all system states. Finally, both theoretical analysis and simulation verify the feasibility of this control method.
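To make the role of the sliding variable in high-order consensus concrete, the sketch below shows, under assumed notation rather than the paper's exact design, how a neighborhood consensus error and a linear sliding surface might be formed for one agent whose state variables are successive derivatives; the coefficient vector c, the adjacency weights a, and the function name are illustrative assumptions only.

import numpy as np

def consensus_sliding_variable(x, neighbors_x, a, c):
    """Illustrative sliding variable for one agent of an n-th order MAS.

    x           : (n,) array of the agent's states [x, x', ..., x^(n-1)]
    neighbors_x : list of (n,) arrays, the same quantities for each neighbor
    a           : (m,) array of adjacency weights to the m neighbors (assumed)
    c           : (n,) array of sliding-surface coefficients (assumed, c[-1] = 1)
    """
    # Weighted neighborhood consensus error for each order of derivative:
    # e^(k) = sum_j a_ij * (x_i^(k) - x_j^(k))
    e = sum(w * (x - xj) for w, xj in zip(a, neighbors_x))
    # Linear sliding surface s = c_0*e + c_1*e' + ... + c_{n-1}*e^(n-1);
    # driving s -> 0 forces every error derivative to zero, i.e. high-order consensus.
    return float(np.dot(c, e))

# Example: a third-order agent with two neighbors
x  = np.array([1.0, 0.2, 0.0])
xs = [np.array([0.8, 0.1, 0.0]), np.array([1.1, 0.3, -0.1])]
s  = consensus_sliding_variable(x, xs, a=np.array([1.0, 1.0]), c=np.array([4.0, 3.0, 1.0]))
print(s)

In the article's scheme, a control of this sliding-mode type would then be optimized by the identifier–critic–actor RL loop described above; the sketch only illustrates how the multiple differentially related states enter a single sliding variable.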
