Abstract

This article investigates robust optimal consensus for nonlinear multiagent systems (MASs) through a local adaptive dynamic programming (ADP) approach combined with event-triggered control. To handle the nonlinearities in the dynamics, the first part defines a novel measurement error and uses it to construct a distributed integral sliding-mode controller under which the consensus errors approximately converge to the origin in fixed time. Then, a modified cost function with augmented control is proposed to handle the unmatched disturbances in the event-based optimal consensus controller. Specifically, a single-network local ADP structure with novel concurrent learning is presented to approximate the optimal consensus policies; it guarantees the robustness of the MASs and the uniform ultimate boundedness (UUB) of the neural network (NN) weight estimation errors, and relaxes the requirement of an initial admissible control. Finally, an illustrative simulation verifies the effectiveness of the method.
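
To make the abstract's ingredients concrete, the following is a minimal, self-contained sketch of an event-triggered critic-weight update with concurrent learning for a single-agent surrogate of the local ADP scheme. The toy error dynamics, quadratic critic basis, triggering threshold, and all gains are illustrative assumptions for exposition only, not the paper's actual formulation.

```python
# Hypothetical sketch: event-triggered critic update with concurrent learning.
# All dynamics, gains, basis functions, and the trigger rule are assumed, not
# taken from the paper.
import numpy as np

# Assumed toy consensus-error dynamics: e_dot = A e + B u (stand-in for the
# nonlinear MAS error dynamics).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state penalty in the assumed cost
R = np.array([[1.0]])  # control penalty

def phi(e):
    """Quadratic critic basis [e1^2, e1*e2, e2^2] (an assumed choice)."""
    return np.array([e[0]**2, e[0]*e[1], e[1]**2])

def dphi(e):
    """Jacobian of the critic basis with respect to e."""
    return np.array([[2*e[0], 0.0],
                     [e[1],   e[0]],
                     [0.0,    2*e[1]]])

def policy(W, e):
    """Approximate optimal control u = -0.5 R^{-1} B^T dphi(e)^T W."""
    return -0.5 * np.linalg.solve(R, B.T @ dphi(e).T @ W)

W = np.zeros(3)         # critic weights; no initial admissible control assumed
memory = []             # concurrent-learning memory of recorded samples
alpha, dt = 2.0, 0.01
e = np.array([1.0, -0.5])
e_trigger = e.copy()    # error sampled at the last triggering instant

for k in range(5000):
    # Assumed static event trigger: re-sample the control input only when the
    # gap between the current and last-sampled error grows too large.
    if np.linalg.norm(e - e_trigger) > 0.05 * np.linalg.norm(e) + 1e-3:
        e_trigger = e.copy()
    u = policy(W, e_trigger)

    # Bellman residual at the current sample.
    e_dot = A @ e + B @ u
    sigma = dphi(e) @ e_dot
    delta = W @ sigma + e @ Q @ e + float(u @ R @ u)

    # Normalized-gradient step on the current sample plus replayed memory
    # samples (the concurrent-learning part, in an assumed form).
    grad = delta * sigma / (1.0 + sigma @ sigma)**2
    for sig_m, cost_m in memory:
        grad += (W @ sig_m + cost_m) * sig_m / (1.0 + sig_m @ sig_m)**2
    W -= alpha * dt * grad

    if k % 200 == 0:    # occasionally record a sample for replay
        memory.append((sigma, e @ Q @ e + float(u @ R @ u)))

    e = e + dt * e_dot  # integrate the error dynamics (explicit Euler)

print("critic weights:", W, "final error norm:", np.linalg.norm(e))
```

Here the replayed memory stands in for persistent excitation, which is why such concurrent-learning schemes can relax the usual requirement of an initially admissible control; the actual multiagent, unmatched-disturbance formulation is developed in the full text.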
