This paper focuses on the development of a learning-based controller for a class of uncertain mechanical systems modeled by the Euler-Lagrange formulation. The considered formulation describes the behavior of a large class of engineering systems, such as vehicular systems, robot manipulators, and satellites. These systems are often characterized by highly nonlinear dynamics, heavy modeling uncertainties, and unknown perturbations, which render accurate-model-based nonlinear control approaches impractical. Motivated by this challenge, a reinforcement learning (RL) adaptive control methodology based on the actor-critic framework is investigated to compensate for the uncertain mechanical dynamics. The approximation inaccuracies caused by RL and the exogenous unknown disturbances are handled via a continuous robust integral of the sign of the error (RISE) control approach. Unlike the classical RISE control law, a tanh(·) function is utilized in place of the sign(·) function to obtain a smoother control signal. The developed controller requires very little prior knowledge of the dynamic model, is robust to unknown dynamics and exogenous disturbances, and achieves asymptotic output tracking. Finally, co-simulations through ADAMS and MATLAB/Simulink on a three degrees-of-freedom (3-DOF) manipulator and experiments on a real-time electromechanical servo system are performed to verify the performance of the proposed approach.
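To illustrate the tanh-for-sign substitution mentioned above, the following is a minimal sketch of a typical RISE feedback term and its smoothed counterpart. The symbols here (filtered tracking error $e_2$, gains $k_s$, $\alpha$, $\beta$, and the slope parameter $\epsilon$) are generic placeholders and are not taken from the paper itself:

```latex
% Classical RISE feedback (sign-based), a common form in the literature:
%   u(t) = (k_s + 1)\, e_2(t) - (k_s + 1)\, e_2(0)
%          + \int_0^t \bigl[ (k_s + 1)\,\alpha\, e_2(\tau)
%          + \beta\, \mathrm{sgn}\bigl(e_2(\tau)\bigr) \bigr] d\tau
%
% Smoothed variant in the spirit of this paper: replace sgn(.) by tanh(./eps),
% which is continuous and avoids chattering near e_2 = 0:
%   u(t) = (k_s + 1)\, e_2(t) - (k_s + 1)\, e_2(0)
%          + \int_0^t \bigl[ (k_s + 1)\,\alpha\, e_2(\tau)
%          + \beta\, \tanh\bigl(e_2(\tau)/\epsilon\bigr) \bigr] d\tau
```

As $\epsilon \to 0$, $\tanh(e_2/\epsilon) \to \mathrm{sgn}(e_2)$ pointwise for $e_2 \neq 0$, so the smoothed law recovers the classical one in the limit while producing a continuous control signal for any finite $\epsilon$.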