Abstract

This article focuses on the optimized adaptive leader–follower consensus control problem for high-order nonlinear multi-agent systems (MASs) with prescribed performance and system uncertainties. A finite-time scaling function is introduced to prescribe not only the steady-state accuracy but also the settling time, which circumvents dependence on initial conditions. By integrating integral reinforcement learning (IRL) and experience replay (ER) into the backstepping design procedure, an optimized adaptive control scheme is developed. Under this scheme, no system dynamics identifier is required, and the persistent excitation requirement is verified via a simplified condition. It is proved that all signals of the closed-loop system are bounded and that the consensus error evolves with the user-prescribed behavior. Finally, the effectiveness of the proposed scheme is validated by simulation results.
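The abstract does not give the explicit form of the finite-time scaling function, but a commonly used construction in the prescribed-performance literature is a performance envelope that decays from an initial bound to a prescribed steady-state bound exactly at a user-chosen settling time, and stays there afterwards. The sketch below is a minimal illustration of that idea; the function name, parameter values, and the particular polynomial decay profile are assumptions for illustration, not the paper's actual design.

```python
def finite_time_envelope(t, beta0=1.0, beta_T=0.05, T=2.0, h=2.0):
    """Illustrative finite-time performance envelope (assumed form).

    Decays from beta0 at t = 0 to the steady-state bound beta_T at
    the prescribed settling time t = T, and remains at beta_T for
    t >= T. Both the accuracy (beta_T) and the settling time (T) are
    user-prescribed, independent of the system's initial conditions.
    """
    if t >= T:
        # Steady-state phase: the error bound is held at beta_T.
        return beta_T
    # Transient phase: polynomial decay toward beta_T; the exponent
    # h > 1 controls how fast the envelope shrinks.
    return (beta0 - beta_T) * ((T - t) / T) ** h + beta_T
```

With such an envelope, the prescribed-performance requirement is that the (transformed) consensus error stays inside the shrinking tube `(-finite_time_envelope(t), finite_time_envelope(t))` for all time, which forces both the transient and the steady-state behavior to follow the user's specification.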
