Abstract

In this paper, a reinforcement learning-based finite-time trajectory tracking control (RLFTC) scheme is developed for an unmanned surface vehicle (USV) subject to completely unknown system dynamics and input constraints, by combining an actor-critic reinforcement learning (RL) mechanism with a finite-time control technique. Unlike previous RL-based tracking schemes, which require infinite-time convergence and are therefore rather sensitive to complex unknowns, the proposed actor-critic finite-time control structure employs adaptive neural network identifiers to recursively update the actor and the critic, so that learning-based robustness is substantially enhanced. Moreover, derived from the Bellman error formulation, the proposed RLFTC is directly optimized in a finite-time manner. Theoretical analysis shows that the RLFTC scheme guarantees semi-global practical finite-time stability (SGPFS) of the closed-loop USV system, with tracking errors converging to an arbitrarily small neighborhood of the origin in finite time while the cost is optimized. Both mathematical simulations and virtual-reality experiments demonstrate the effectiveness and superiority of the proposed RLFTC scheme.
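To make the actor-critic mechanism described above concrete, the following is a minimal illustrative sketch of a generic actor-critic update driven by the Bellman (temporal-difference) error, with a saturated control input standing in for the input constraint. It is not the paper's RLFTC law: the toy scalar plant, RBF features, quadratic stage cost, learning rates, and all names here are assumptions for illustration, and the finite-time analysis and adaptive neural network identifiers of the actual scheme are not reproduced.

```python
# Hypothetical actor-critic tracking sketch (NOT the paper's RLFTC scheme).
# A linear-in-features critic estimates the cost-to-go of the tracking error;
# the actor is adjusted with the temporal-difference (Bellman) error, and the
# control is saturated to mimic an input constraint.
import numpy as np

rng = np.random.default_rng(0)

# --- assumed toy scalar plant (stands in for the unknown USV dynamics) ---
a, b, dt = -0.5, 1.0, 0.02           # hypothetical plant parameters
u_max = 2.0                          # input constraint (saturation level)

def step(x, u):
    return x + dt * (a * x + b * u)  # Euler step of x' = a*x + b*u

# --- Gaussian RBF features over the tracking error (assumed basis) ---
centers = np.linspace(-2.0, 2.0, 11)
width = 0.4

def phi(e):
    return np.exp(-((e - centers) ** 2) / (2 * width ** 2))

# --- actor (mean control) and critic (cost-to-go) weight vectors ---
w_actor = np.zeros_like(centers)
w_critic = np.zeros_like(centers)
alpha_a, alpha_c = 5e-3, 5e-2        # learning rates (assumed)
gamma, sigma = 0.98, 0.3             # discount factor, exploration std

x, t = 0.0, 0.0
for k in range(20000):
    r = np.sin(t)                    # reference trajectory
    e = x - r                        # tracking error
    f = phi(e)

    mu = float(w_actor @ f)          # actor's mean control
    u = float(np.clip(mu + sigma * rng.standard_normal(), -u_max, u_max))

    x_next = step(x, u)
    t += dt
    e_next = x_next - np.sin(t)

    cost = e ** 2 + 0.1 * u ** 2     # quadratic stage cost (assumed)
    # Bellman (temporal-difference) error for the cost-to-go estimate
    delta = cost + gamma * float(w_critic @ phi(e_next)) - float(w_critic @ f)

    w_critic += alpha_c * delta * f                          # critic: reduce TD error
    w_actor -= alpha_a * delta * (u - mu) / sigma ** 2 * f   # actor: penalize costly actions

    x = x_next

print(f"final |tracking error| ~ {abs(e):.3f}")
```

The sign convention reflects cost minimization: a positive TD error means the explored action was costlier than predicted, so the actor's mean is shifted away from it, while the critic's weights descend the squared Bellman error. The paper's scheme replaces these linear approximators with adaptive neural network identifiers and adds finite-time convergence guarantees.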
