Abstract

A fixed-time trajectory tracking control method based on reinforcement learning (RL) is studied for uncertain robotic manipulators with input saturation. The RL control algorithm is implemented with radial basis function (RBF) neural networks (NNs), in which an actor NN generates the control strategy and a critic NN evaluates the execution cost. A new nonsingular fast terminal sliding mode technique ensures that the tracking error converges in fixed time, and an upper bound on the convergence time is estimated. To address actuator saturation, a nonlinear anti-windup compensator is designed to compensate for the saturation effect of the joint torque actuators in real time. Finally, the stability of the closed-loop system is analyzed with a Lyapunov candidate function, and the fixed-time convergence of the closed-loop system is proven. Simulation and experimental results demonstrate the effectiveness and superiority of the proposed control law.
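The abstract does not give the paper's network details, but the actor and critic described above are both single-hidden-layer RBF NNs of the standard form W^T φ(x) with Gaussian basis functions. A minimal sketch of that approximator (center locations, width, and weights here are illustrative assumptions, not the paper's values):

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF features: phi_i(x) = exp(-||x - c_i||^2 / width^2)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / width ** 2)

def nn_output(x, W, centers, width):
    """Single-hidden-layer RBF NN output y = W^T phi(x).

    In an actor-critic scheme of this kind, one weight matrix W would
    parameterize the actor (control strategy) and another the critic
    (cost evaluation), both sharing this basis structure.
    """
    return W.T @ rbf_features(x, centers, width)

# Illustrative use: 2-D state, two Gaussian centers, one output.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
W = np.array([[0.5], [0.25]])
y = nn_output(np.array([0.0, 0.0]), W, centers, width=1.0)
```

At a basis-function center the corresponding feature equals 1, so the output there is dominated by that center's weight; online schemes such as the paper's would update W from tracking/cost signals rather than fix it as here.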

Highlights

  • Robotic manipulators are widely used in the military, manufacturing, medical, and other hazardous-environment fields

  • Many researchers have focused on the design of finite-time control algorithms for mechanical systems

  • Within the actor-critic reinforcement learning (RL) framework, the designed controller achieves fixed-time convergence of the trajectory tracking error; a parameter projection algorithm is applied to ensure the boundedness of the neural network (NN) weight vectors, which provides a rigorous argument for the stability proof of the closed-loop system


Summary

INTRODUCTION

Robotic manipulators are widely used in the military, manufacturing, medical, and other hazardous-environment fields. This paper designs a fixed-time control algorithm for the uncertain robotic manipulator control system that combines nonsingular terminal sliding mode control with the RL method. The RL-based control method obtains an optimized control strategy from state information and reduces the design cost. Within the actor-critic RL framework, the designed controller achieves fixed-time convergence of the trajectory tracking error, and a parameter projection algorithm is applied to ensure the boundedness of the NN weight vectors, which provides a rigorous argument for the stability proof of the closed-loop system.
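The summary names the nonsingular fast terminal sliding mode technique but does not reproduce the paper's surface. A generic NFTSM surface of the commonly used form s = ė + α·sig(e)^γ₁ + β·sig(e)^γ₂ (with sig(x)^γ = |x|^γ·sign(x), γ₁ > 1 and 0 < γ₂ < 1) can be sketched as follows; the gains and exponents here are illustrative assumptions, not the paper's design:

```python
import numpy as np

def sig(x, gamma):
    """Sign-preserving power: |x|^gamma * sign(x). Using |x|^gamma with
    0 < gamma < 1 (rather than x^(p/q) with a negative-power derivative)
    is what keeps this family of surfaces nonsingular at e = 0."""
    return np.abs(x) ** gamma * np.sign(x)

def nftsm_surface(e, e_dot, alpha=2.0, beta=1.0, g1=1.5, g2=0.6):
    """Generic nonsingular fast terminal sliding surface (illustrative):
    s = e_dot + alpha * sig(e)^g1 + beta * sig(e)^g2.
    The g1 > 1 term dominates for large errors (fast reaching) and the
    0 < g2 < 1 term dominates near the origin (finite/fixed-time pull-in).
    """
    return e_dot + alpha * sig(e, g1) + beta * sig(e, g2)

# On the surface s = 0, the error dynamics contract toward e = 0.
s = nftsm_surface(e=1.0, e_dot=0.0)   # alpha + beta = 3.0 for these gains
```

A fixed-time controller would then drive s to zero within a time bound independent of the initial state; that reaching law, and the RL compensation of model uncertainty, are the paper's contribution and are not reproduced here.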

PROBLEM STATEMENT
Preliminary
Critic Neural Network Design for Reinforcement Learning
Actor Neural Network-Based Controller Design
Numerical Simulations and Comparisons
Experimental Results and Performance Validations
CONCLUSIONS
