Abstract

This paper presents the computation of feasible paths for mobile robots in known and unknown environments using a QAPF learning algorithm. Q-learning is a reinforcement learning algorithm whose popularity in mobile robot path planning has grown because it can learn on its own, without requiring an a priori model of the environment. Despite this advantage, however, Q-learning converges slowly to the optimal solution. To address this limitation, the concept of partially guided Q-learning is employed, wherein the artificial potential field (APF) method is used to improve the classical Q-learning approach. The proposed QAPF learning algorithm thus combines Q-learning with the APF method to accelerate learning and improve final path-planning performance. Planning effectiveness is measured by path length, path smoothness, and learning time. Experiments demonstrate that the QAPF algorithm achieves better learning values than classical Q-learning under these criteria in all the test environments presented, in both offline and online path-planning modes. Compared with the classical approach, the QAPF learning algorithm achieved an improvement of 18.83% in path length in the online mode, an improvement of 169.75% in path smoothness in the offline mode, and an improvement of 74.84% in training time.
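
To make the idea concrete, below is a minimal, illustrative sketch (not the paper's implementation) of partially guided Q-learning: an APF computed over a small grid world seeds the initial Q-table, so actions that descend the potential field start with higher values, and standard Q-learning then refines the table. The grid size, obstacle layout, gains `k_att` and `k_rep`, rewards, and hyperparameters are assumptions chosen for illustration only.

```python
# Sketch of APF-seeded ("partially guided") Q-learning on a grid world.
# All constants below are illustrative assumptions, not values from the paper.
import numpy as np

GRID = 10                      # hypothetical 10x10 grid world
GOAL = (9, 9)
OBSTACLES = {(4, 4), (4, 5), (5, 4)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def apf_potential(cell, k_att=1.0, k_rep=20.0, d0=2.0):
    """Classical APF: attractive potential toward the goal plus
    repulsive potential near obstacles within influence distance d0."""
    d_goal = np.hypot(cell[0] - GOAL[0], cell[1] - GOAL[1])
    u = 0.5 * k_att * d_goal ** 2
    for obs in OBSTACLES:
        d = np.hypot(cell[0] - obs[0], cell[1] - obs[1])
        if 0 < d <= d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def init_q_with_apf():
    """Seed Q(s, a) with the negated potential of each action's successor
    state, biasing the initial policy toward descending the field."""
    q = np.zeros((GRID, GRID, len(ACTIONS)))
    for x in range(GRID):
        for y in range(GRID):
            for a, (dx, dy) in enumerate(ACTIONS):
                nx = min(max(x + dx, 0), GRID - 1)
                ny = min(max(y + dy, 0), GRID - 1)
                q[x, y, a] = -apf_potential((nx, ny))
    return q

def step(state, action):
    """Grid transition with a collision penalty and a goal reward."""
    dx, dy = ACTIONS[action]
    nxt = (min(max(state[0] + dx, 0), GRID - 1),
           min(max(state[1] + dy, 0), GRID - 1))
    if nxt in OBSTACLES:
        return state, -10.0, False     # collision: stay put, penalize
    if nxt == GOAL:
        return nxt, 100.0, True        # goal reached
    return nxt, -1.0, False            # step cost encourages short paths

def train(q, episodes=300, alpha=0.1, gamma=0.95, eps=0.1,
          rng=np.random.default_rng(0)):
    """Standard epsilon-greedy Q-learning update on the (APF-seeded) table."""
    for _ in range(episodes):
        s, done = (0, 0), False
        for _ in range(200):           # step cap per episode
            a = (rng.integers(len(ACTIONS)) if rng.random() < eps
                 else int(np.argmax(q[s])))
            s2, r, done = step(s, a)
            q[s][a] += alpha * (r + gamma * np.max(q[s2]) - q[s][a])
            s = s2
            if done:
                break
    return q

# An APF-guided start typically needs fewer episodes than a zero-initialized
# table; compare train(init_q_with_apf()) against train(np.zeros_like(...)).
q = train(init_q_with_apf())
```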
