Abstract

In this study, a fuzzy reinforcement learning control (FRLC) approach is proposed to achieve trajectory tracking of a differential drive mobile robot (DDMR). The proposed FRLC approach designs fuzzy membership functions to fuzzify the relative position and heading between the current pose and a prescribed trajectory. Instead of fuzzy inference rules, the mapping from the fuzzy inputs to the actuator voltage outputs is built by a reinforcement learning (RL) agent. Herein, the deep deterministic policy gradient (DDPG) methodology, consisting of actor and critic neural networks, is employed in the RL agent. Simulations are conducted considering varying slip-ratio disturbances, different initial positions, and two different trajectories in the testing environment, and a comparison with the classical DDPG model is presented. The results show that the proposed FRLC successfully tracks different trajectories under varying slip-ratio disturbances and outperforms the classical DDPG model. Moreover, experimental results validate that the proposed FRLC is also applicable to real mobile robots.
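To illustrate the pipeline described above, the following is a minimal sketch of one FRLC control step: the distance and heading errors with respect to the reference trajectory are fuzzified through triangular membership functions, and the resulting fuzzy state is passed to a DDPG-style actor that outputs bounded left/right motor voltages. The membership partitions, the 6-dimensional fuzzy state, the 12 V voltage bound, and the `ActorStub` placeholder for the trained actor network are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_errors(dist_err, heading_err):
    """Fuzzify the distance and heading errors relative to the trajectory.
    The partitions below are illustrative guesses, not the paper's design."""
    dist_sets = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.5)]              # near / mid / far (m)
    head_sets = [(-np.pi, -np.pi / 2, 0.0),
                 (-np.pi / 2, 0.0, np.pi / 2),
                 (0.0, np.pi / 2, np.pi)]                                          # left / ahead / right (rad)
    mu_d = [tri_mf(dist_err, *s) for s in dist_sets]
    mu_h = [tri_mf(heading_err, *s) for s in head_sets]
    return np.array(mu_d + mu_h)                                                   # fuzzy state fed to the RL agent

class ActorStub:
    """Hypothetical stand-in for the trained DDPG actor network: maps the
    fuzzy state to left/right motor voltages bounded by v_max."""
    def __init__(self, state_dim=6, action_dim=2, v_max=12.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(action_dim, state_dim))
        self.v_max = v_max

    def act(self, fuzzy_state):
        return self.v_max * np.tanh(self.W @ fuzzy_state)

# Example: one control step of the FRLC pipeline
actor = ActorStub()
state = fuzzify_errors(dist_err=0.4, heading_err=0.2)
voltages = actor.act(state)   # [V_left, V_right] applied to the DDMR wheel motors
print(voltages)
```

In the actual method, the `ActorStub` weights would be replaced by the actor network trained jointly with a critic under the DDPG algorithm, with the slip-ratio disturbances applied in the simulation environment during training and testing.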

