Abstract
In this paper, we propose a reinforcement learning structure that auto-tunes PID gains by solving an optimal tracking control problem for robot manipulators. Capitalizing on an actor–critic framework implemented with neural networks, we achieve optimal tracking performance while simultaneously estimating the unknown system dynamics. The critic network approximates the cost function, which serves as an indicator of control performance. Guided by feedback from the critic, the actor network learns time-varying PID gains that optimize the control input, thereby steering the system toward optimal performance. Furthermore, we use Lyapunov's direct method to establish the stability of the closed-loop system. This approach provides an analytical procedure for systematically adjusting the PID gains of a stable robot manipulator system, bypassing the otherwise ad hoc and painstaking manual tuning process. The resulting actor–critic PID-like controller exhibits stable adaptive and learning capabilities while retaining a simple structure and low online computational demands. Numerical simulations underscore the effectiveness and advantages of the proposed actor–critic neural network PID control.
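To make the described control structure concrete, the following minimal Python sketch illustrates a PID-like law whose time-varying gains are supplied by an actor, applied to a toy one-degree-of-freedom plant. This is an assumption-laden illustration, not the paper's implementation: the function names (actor_gains, pid_control), the gain values, and the unit-mass double-integrator plant are all hypothetical, and the actor/critic networks and their learning updates are omitted.

```python
import numpy as np

def actor_gains(state):
    # Placeholder actor: in the paper this is a neural network trained
    # with critic feedback; here we return fixed positive gains so the
    # sketch runs. (Kp, Ki, Kd) values are assumed, not from the paper.
    return np.array([20.0, 5.0, 2.0])

def pid_control(e, e_int, e_dot, gains):
    # PID-like control input with (potentially time-varying) gains
    # supplied by the actor at each step.
    Kp, Ki, Kd = gains
    return Kp * e + Ki * e_int + Kd * e_dot

# One tracking run on a hypothetical unit-mass double integrator,
# integrated with forward Euler at an assumed step of dt = 0.01 s.
dt = 0.01
q, q_dot, e_int = 0.0, 0.0, 0.0
q_ref = 1.0  # constant reference, so e_dot = -q_dot

for _ in range(1000):
    e = q_ref - q
    e_dot = -q_dot
    e_int += e * dt
    u = pid_control(e, e_int, e_dot, actor_gains((e, e_int, e_dot)))
    q_dot += u * dt  # unit mass: q_ddot = u
    q += q_dot * dt

print(f"final tracking error: {q_ref - q:.4f}")
```

In the full scheme, actor_gains would be replaced by the actor network's forward pass, with its weights updated online from the critic's estimate of the tracking cost rather than held fixed as above.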