Abstract

Proportional–integral–derivative (PID) control is the most widely used control method in industrial control, robot control, and other fields. However, traditional PID control performs poorly when the system cannot be accurately modeled or the operating environment varies in real time. To tackle these problems, we propose a self-adaptive, model-free SAC-PID control approach based on reinforcement learning for the automatic control of mobile robots. A new hierarchical structure is developed, consisting of an upper controller based on soft actor-critic (SAC), one of the most competitive continuous-control algorithms, and a lower controller based on an incremental PID controller. SAC receives the dynamic information of the mobile robot as input and simultaneously outputs the optimal parameters of the incremental PID controllers, compensating in real time for the error between the path and the mobile robot. In addition, a combination of the 24-neighborhood method and polynomial fitting is developed to improve the adaptability of the SAC-PID control method to complex environments. The effectiveness of the SAC-PID control method is verified on several paths of varying difficulty, both in Gazebo and on a real mecanum mobile robot. Furthermore, compared with fuzzy PID control, the SAC-PID method offers strong robustness, generalization, and real-time performance.
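The hierarchical structure described above can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: a standard incremental PID controller whose gains are supplied at every control cycle by an external policy, mirroring the SAC-upper / PID-lower split. The class name, method names, and the placeholder gains are hypothetical.

```python
class IncrementalPID:
    """Incremental PID whose gains are set externally each step
    (e.g., by a SAC policy, as in the SAC-PID hierarchy)."""

    def __init__(self):
        self.e_prev = 0.0   # e(k-1)
        self.e_prev2 = 0.0  # e(k-2)

    def step(self, error, kp, ki, kd):
        # Incremental form:
        # du(k) = Kp*(e(k) - e(k-1)) + Ki*e(k)
        #         + Kd*(e(k) - 2*e(k-1) + e(k-2))
        du = (kp * (error - self.e_prev)
              + ki * error
              + kd * (error - 2.0 * self.e_prev + self.e_prev2))
        self.e_prev2 = self.e_prev
        self.e_prev = error
        return du  # control increment; the actuator applies u += du


# Usage sketch: each cycle, the upper SAC policy would map the robot's
# state to (kp, ki, kd); fixed placeholder gains are used here instead.
pid = IncrementalPID()
u = 0.0
for e in [1.0, 0.8, 0.5, 0.2, 0.0]:
    u += pid.step(e, kp=0.6, ki=0.1, kd=0.05)
```

The incremental form outputs a change in the control signal rather than the absolute value, which avoids integral wind-up accumulation in the controller itself and makes it natural for the learned policy to retune the gains on the fly.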
