Abstract

This article develops a distributed time-varying optimal formation protocol for a class of second-order uncertain nonlinear multiagent systems (MASs), based on an adaptive neural network (NN) state observer designed through the backstepping method together with a simplified reinforcement learning (RL) scheme. Owing to practical sensor limitations, each follower agent has access only to local information and partially measurable states. From the standpoint of distributed optimized formation, the uncertain nonlinear dynamics and the unmeasurable states jointly threaten the stability of time-varying cooperative formation control. Moreover, the Hamilton-Jacobi-Bellman (HJB) equation cannot be solved directly when the system dynamics are unknown. The uncertainty and the unmeasurable states are handled by the adaptive NN state observer, and a simplified NN-based RL scheme is further designed to achieve the desired second-order formation configuration at minimal cost. The proposed optimization protocol not only compensates for the unmeasurable states and achieves the prescribed time-varying formation performance with all errors semiglobally uniformly ultimately bounded (SGUUB), but also simplifies the stability proof and the updating of the critic and actor NNs. Together, these approaches yield an optimal control scheme for time-varying formation control. Finally, the validity of the theoretical method is established by Lyapunov stability theory and verified by numerical simulation.
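As a point of reference, the following LaTeX sketch states the kind of second-order agent model and HJB condition the abstract alludes to; the specific model, error variable z_i, weighting matrices Q_i and R_i, basis functions \varphi_i, and NN weights W_i are illustrative assumptions rather than the paper's actual notation.

% Illustrative second-order agent dynamics with unknown nonlinearity f_i and
% measurable output y_i only (notation assumed, not taken from the paper)
\dot{x}_{i,1} = x_{i,2}, \qquad
\dot{x}_{i,2} = f_i(x_i) + u_i, \qquad
y_i = x_{i,1}

% Unknown nonlinearity approximated by an NN over the observer estimate \hat{x}_i
f_i(x_i) \approx W_i^{*\top} \varphi_i(\hat{x}_i) + \varepsilon_i

% Local value function and HJB equation for the formation-tracking error z_i
V_i^{*}(z_i) = \min_{u_i} \int_{t}^{\infty} \left( z_i^{\top} Q_i z_i + u_i^{\top} R_i u_i \right) \mathrm{d}\tau

0 = \min_{u_i} \left[ z_i^{\top} Q_i z_i + u_i^{\top} R_i u_i + \nabla V_i^{*\top}(z_i)\, \dot{z}_i \right]

The point made in the abstract is that this HJB equation admits no closed-form solution when f_i is unknown and x_{i,2} is unmeasured, which is what motivates combining the adaptive NN state observer with critic and actor NNs in the simplified RL scheme.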
