Abstract

In this paper, we propose a reinforcement learning scheme for auto-tuning PID gains by solving an optimal tracking control problem for robot manipulators. Capitalizing on the actor–critic framework implemented by neural networks, we achieve optimal tracking performance while simultaneously estimating the unknown system dynamics. The critic network approximates the cost function, which serves as an indicator of control performance. Guided by feedback from the critic, the actor network learns time-varying PID gains that optimize the control input, thereby steering the system toward optimal performance. Furthermore, we use Lyapunov's direct method to establish the stability of the closed-loop system. This approach provides an analytical procedure for systematically adjusting the PID gains of a stable robot manipulator system, bypassing the otherwise ad hoc and painstaking manual tuning process. The resulting actor–critic PID-like control exhibits stable adaptation and learning capabilities while maintaining a simple structure and a low online computational cost. Numerical simulations underscore the effectiveness and advantages of the proposed actor–critic neural network PID control.
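Although the abstract does not spell out the algorithm, the overall idea can be illustrated with a minimal sketch. The listing below is an assumption-laden demonstration, not a reproduction of the paper's method: it uses a hypothetical single-link manipulator with made-up parameters, a linear critic over hand-chosen features, a quadratic stage cost, and a CACLA-style actor update with Gaussian exploration. Every name and parameter value in it is illustrative.

# A minimal illustrative sketch -- NOT the authors' exact algorithm.
# Assumptions (not from the paper): a 1-DOF manipulator with made-up
# parameters, a linear critic over hand-picked features, a quadratic
# stage cost, and a CACLA-style actor update with Gaussian exploration.
import numpy as np

rng = np.random.default_rng(0)
dt, gamma = 0.01, 0.98                 # integration step, discount factor
m, l, b, g = 1.0, 0.5, 0.1, 9.81       # hypothetical link mass, length, damping

def softplus(x):
    return np.logaddexp(0.0, x)        # keeps learned PID gains positive

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # derivative of softplus

def plant_step(q, qd, u):
    # One Euler step of m*l^2*qdd + b*qd + m*g*l*sin(q) = u.
    qdd = (u - b * qd - m * g * l * np.sin(q)) / (m * l**2)
    return q + dt * qd, qd + dt * qdd

def features(e, ed, ei):
    # Simple polynomial features of the tracking-error state.
    return np.array([e, ed, ei, e * e, ed * ed, ei * ei, 1.0])

w = np.zeros(7)                        # critic weights: V(s) ~ w . phi(s)
A = rng.normal(0.0, 0.1, size=(3, 7))  # actor weights: gains = softplus(A phi)
alpha_c, alpha_a, sigma = 0.02, 0.005, 0.2

q = qd = ei = 0.0
for k in range(20000):
    t = k * dt
    e, ed = np.sin(t) - q, np.cos(t) - qd      # track q_ref(t) = sin(t)
    phi = features(e, ed, ei)

    z = A @ phi
    gains_mean = softplus(z)                   # actor's nominal (Kp, Kd, Ki)
    gains = np.maximum(gains_mean + sigma * rng.normal(size=3), 0.0)
    Kp, Kd, Ki = gains
    u = np.clip(Kp * e + Kd * ed + Ki * ei, -20.0, 20.0)  # PID-like control

    q, qd = plant_step(q, qd, u)
    ei = np.clip(ei + dt * e, -5.0, 5.0)       # integral term with anti-windup
    e2, ed2 = np.sin(t + dt) - q, np.cos(t + dt) - qd

    cost = e * e + 0.1 * ed * ed + 1e-4 * u * u
    delta = cost + gamma * (w @ features(e2, ed2, ei)) - w @ phi  # TD error
    w += alpha_c * delta * phi                 # critic: semi-gradient TD(0)

    # Actor: if the explored gains did better than the critic predicted
    # (negative TD error for a cost), pull the mean gains toward them.
    if delta < 0.0:
        A += alpha_a * np.outer((gains - gains_mean) * sigmoid(z), phi)

print("final learned gains (Kp, Kd, Ki):", softplus(A @ phi))

In this sketch, the critic's temporal-difference error plays the role of the performance indicator described above: the actor adjusts its gain schedule only when exploration outperforms the critic's prediction, which keeps the per-step update simple and computationally cheap online, in the spirit of the abstract's claims.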
