Abstract

Dynamic obstacle avoidance is a classic problem in robot control: the robot must avoid obstacles in its environment while reaching its destination. Among the various path planning algorithms, the reinforcement learning algorithm Q-learning offers one way to address dynamic obstacle avoidance. This article provides a comprehensive review of recent research progress and achievements in dynamic obstacle avoidance through the analysis and improvement of the Q-learning algorithm. It first introduces the background and current state of research on dynamic obstacle avoidance, then gives a detailed exposition of the principles and implementation of the Q-learning algorithm. Next, it analyzes the shortcomings of Q-learning and discusses several improvement measures, such as combining deep learning with Q-learning and using recombination Q-learning. Finally, the article summarizes the current applications of Q-learning in dynamic obstacle avoidance and proposes future research directions.
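For readers unfamiliar with the algorithm the abstract discusses, the core of tabular Q-learning is the update rule Q(s,a) ← Q(s,a) + α[r + γ·max_a' Q(s',a') − Q(s,a)]. The following is a minimal, self-contained sketch on a toy one-dimensional corridor; the environment, parameter values, and function names are illustrative assumptions, not taken from the reviewed paper.

```python
import random

def q_learning(n_states=6, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 1-D corridor (illustrative only).

    States are 0..n_states-1; action 0 moves left, action 1 moves right.
    Reaching the last state yields reward +1 and ends the episode.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy action selection: explore with prob. epsilon.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s_next = max(0, min(goal, s + (1 if a == 1 else -1)))
            r = 1.0 if s_next == goal else 0.0
            # Q-learning update: bootstrap from the greedy next-state value.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
# The learned greedy policy should move right (action 1) in every
# non-goal state, since reward is only given at the right end.
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(5)]
```

In a dynamic obstacle avoidance setting the state would additionally encode obstacle positions and the reward would penalize collisions, but the update rule itself is unchanged.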
