Abstract
In this paper, a neural adaptive optimal control strategy is proposed for strict-feedback nonlinear multiagent systems (MASs) with full-state constraints and unmeasurable states. To solve the Hamilton–Jacobi–Bellman (HJB) equation, reinforcement learning (RL) is employed within the actor-critic architecture. In contrast to existing results on the optimized backstepping technique, introducing the command-filter technique into the value function relaxes the condition that the derivative of the virtual controller be bounded by a constant. Moreover, when full-state constraints and unmeasurable states are considered, the tracking control problem of MASs can be solved without violating the constraints, and the resource consumption can be reduced. The states of the MASs are estimated by a state observer and confined by a novel mapping function. Using the Lyapunov stability theorem, it is verified that all signals in the closed-loop system are uniformly ultimately bounded (UUB) and that the tracking error converges to a small neighborhood of the origin. Finally, a simulation example illustrates the validity of the proposed control strategy.
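For context, a minimal sketch of the kind of HJB equation the actor-critic scheme approximates, stated for a generic affine nonlinear system rather than the paper's strict-feedback MAS formulation; the symbols $f$, $g$, $Q$, $R$, and $V^*$ are illustrative placeholders and are not taken from the paper:

```latex
% Generic infinite-horizon optimal control setup (illustrative only).
% System: \dot{x} = f(x) + g(x)u,
% Cost:   J = \int_0^\infty \big( Q(x) + u^\top R u \big)\,\mathrm{d}t.
\begin{align}
  0   &= \min_{u}\Big[\, Q(x) + u^\top R u
         + \big(\nabla V^*(x)\big)^\top \big(f(x) + g(x)u\big) \Big],\\
  u^* &= -\tfrac{1}{2}\, R^{-1} g(x)^\top \nabla V^*(x).
\end{align}
```

Because this equation is nonlinear in $V^*$ and generally has no closed-form solution, the abstract's actor-critic RL architecture is the standard remedy: a critic network approximates the value function $V^*$ while an actor network approximates the optimal control $u^*$.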