Abstract

In this paper, a neural adaptive optimal control strategy is proposed for strict-feedback nonlinear multiagent systems (MASs) with full-state constraints and unmeasurable states. To solve the Hamilton–Jacobi–Bellman (HJB) equation, reinforcement learning (RL) is employed with an actor-critic architecture, as sketched below. Different from existing results on the optimized backstepping technique, introducing the command filter technique into the value function relaxes the condition that the derivative of the virtual controller be bounded by a constant. Moreover, even in the presence of full-state constraints and unmeasurable states, the tracking control problem of MASs can be solved without violating the constraints, and the resource consumption can be reduced. The states of the MASs are estimated by a state observer and constrained by a novel mapping function. Using the Lyapunov stability theorem, it is verified that all signals in the closed-loop system are uniformly ultimately bounded (UUB) and that the tracking error converges to a small neighborhood of the origin. Finally, a simulation example is given to illustrate the validity of the proposed control strategy.
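
For context, a generic statement of the HJB equation referenced above, written for a single-agent surrogate system \(\dot{x} = f(x) + g(x)u\) with a quadratic running cost (the paper's exact filtered-error formulation and cost may differ):

\[
V^{*}(x) = \min_{u} \int_{t}^{\infty} \big( x^{\top} Q x + u^{\top} R u \big)\, d\tau,
\qquad
0 = \min_{u} \Big[ x^{\top} Q x + u^{\top} R u + \big(\nabla V^{*}(x)\big)^{\top} \big( f(x) + g(x)u \big) \Big],
\]

whose minimizer gives the optimal controller \(u^{*}(x) = -\tfrac{1}{2} R^{-1} g^{\top}(x) \nabla V^{*}(x)\). Since \(V^{*}\) rarely has a closed form for nonlinear \(f\), the actor-critic RL scheme approximates it with neural networks.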
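A minimal numerical sketch of this actor-critic approximation idea follows; the dynamics, critic basis, and learning gains are hypothetical placeholders for illustration, not the design used in the paper:

import numpy as np

# Actor-critic sketch for approximating the HJB solution of
# dx/dt = f(x) + g(x)u with running cost x'Qx + R u^2.
# f, g, phi, and all gains below are hypothetical stand-ins.

def f(x):                       # hypothetical drift dynamics
    return np.array([x[1], -x[0] - 0.5 * x[1]])

g = np.array([0.0, 1.0])        # hypothetical input vector field
Q, R = np.eye(2), 1.0

def grad_phi(x):                # Jacobian of the quadratic critic basis
    return np.array([[2 * x[0], 0.0],     # phi = [x1^2, x1*x2, x2^2]
                     [x[1],     x[0]],
                     [0.0,      2 * x[1]]])

Wc = np.ones(3)                 # critic weights: V_hat(x) = Wc . phi(x)
Wa = np.ones(3)                 # actor weights
kc, ka, dt = 10.0, 2.0, 1e-3    # learning gains and integration step

x = np.array([1.0, -0.5])
for _ in range(30000):
    # Actor: u = -(1/2R) g' dV_a/dx, with dV_a/dx = grad_phi' Wa
    u = -0.5 / R * g @ (grad_phi(x).T @ Wa)
    xdot = f(x) + g * u
    # Bellman residual of the approximate HJB equation
    omega = grad_phi(x) @ xdot
    delta = x @ Q @ x + R * u**2 + Wc @ omega
    # Normalized gradient-descent critic update; actor tracks the critic
    Wc -= dt * kc * delta * omega / (1.0 + omega @ omega) ** 2
    Wa -= dt * ka * (Wa - Wc)
    x = x + dt * xdot           # Euler integration of the plant

print("critic weights:", Wc)   # approximate value-function parameters

The normalized critic update drives the Bellman residual toward zero along the trajectory, while the actor weights are pulled toward the critic's, mirroring the UUB-style convergence argument that papers of this type establish via Lyapunov analysis.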
