Abstract

This article investigates the robust optimal consensus problem for nonlinear multiagent systems (MASs) through a local adaptive dynamic programming (ADP) approach and an event-triggered control method. To handle the nonlinearities in the dynamics, the first part defines a novel measurement error to construct a distributed integral sliding-mode controller, under which the consensus errors approximately converge to the origin in fixed time. Then, a modified cost function with augmented control is proposed to deal with the unmatched disturbances in the event-based optimal consensus controller. Specifically, a single-network local ADP structure with novel concurrent learning is presented to approximate the optimal consensus policies; it guarantees the robustness of the MASs and the uniform ultimate boundedness (UUB) of the neural network (NN) weight estimation errors, and it relaxes the requirement of an initial admissible control. Finally, an illustrative simulation verifies the effectiveness of the proposed method.
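The paper's exact update laws are not reproduced here, but the following minimal sketch illustrates the general ingredients the abstract names: a single-critic ADP weight update driven by a Bellman residual, an event-triggered rule that holds the sampled state between triggering instants, and a concurrent-learning memory stack that replays recorded samples so persistent excitation is not required online. Everything below (the scalar single-agent dynamics f and g, the basis phi, the gains, and the triggering threshold) is an illustrative assumption, not the authors' formulation.

```python
# Hypothetical, simplified sketch of event-triggered single-critic ADP with
# concurrent learning for one scalar agent. Not the paper's exact algorithm.
import numpy as np

def phi(x):
    """Assumed polynomial critic basis: V(x) ~ W^T phi(x)."""
    return np.array([x**2, x**4])

def dphi(x):
    """Gradient of the basis with respect to x."""
    return np.array([2.0 * x, 4.0 * x**3])

def f(x):  # assumed nonlinear drift
    return -x + 0.5 * np.sin(x)

def g(x):  # assumed input gain
    return 1.0

Q, R = 1.0, 1.0            # state and control weights in the running cost
alpha = 0.5                # critic learning rate (assumed)
W = np.zeros(2)            # critic NN weights
memory = []                # concurrent-learning stack of recorded samples

x, x_hat = 1.0, 1.0        # true state and last-triggered (held) state
dt, trigger_threshold = 0.01, 0.05

for k in range(2000):
    # Event-triggered rule: refresh the held state only when the gap is large.
    if abs(x - x_hat) > trigger_threshold:
        x_hat = x
    # Event-based policy computed from the held state: u = -(1/2R) g dV/dx.
    u = -0.5 / R * g(x_hat) * (dphi(x_hat) @ W)

    # Hamiltonian (Bellman) residual at the current sample.
    e = dphi(x) @ W * (f(x) + g(x) * u) + Q * x**2 + R * u**2

    # Normalized gradient-descent critic update from the current residual ...
    grad = dphi(x) * (f(x) + g(x) * u)
    W -= alpha * dt * e * grad / (1.0 + grad @ grad)

    # ... plus replay of stored samples (concurrent learning).
    if k % 50 == 0:
        memory.append((x, u))
    for xm, um in memory[-10:]:
        em = dphi(xm) @ W * (f(xm) + g(xm) * um) + Q * xm**2 + R * um**2
        gm = dphi(xm) * (f(xm) + g(xm) * um)
        W -= alpha * dt * em * gm / (1.0 + gm @ gm)

    # Propagate the (assumed) dynamics one Euler step.
    x += dt * (f(x) + g(x) * u)

print("final state:", x, "critic weights:", W)
```

In the multiagent setting of the paper, each agent would run such a local critic on its own consensus error using only neighborhood information, with the integral sliding-mode layer and the augmented cost handling the nonlinearities and unmatched disturbances; the sketch above only conveys the single-critic, event-triggered, concurrent-learning structure.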
