In this article, a novel Robust Integral of the Sign of the Error (RISE)-based actor–critic reinforcement learning control structure is established to address the trajectory tracking control problem, optimality performance, and observer effectiveness of a three-mecanum-wheeled mobile robot subject to slipping effects. An actor–critic reinforcement learning algorithm with a discount factor is integrated with the nonlinear RISE feedback term, which is designed to eliminate dynamic uncertainties/disturbances from the affine nominal system. Moreover, the persistence of excitation (PE) condition can be relaxed owing to the presence of the RISE term. Stability analyses in two proposed theorems demonstrate that all signals in the closed-loop system and the learning weights are uniformly ultimately bounded (UUB), and that incorporating the RISE term promotes tracking performance. Finally, simulation results are presented, together with comparisons, to illustrate both the strong capability and the economy in control resources of the proposed algorithm.