Abstract
Because interactive environments are complex, path planning with dynamic obstacle avoidance poses a significant challenge to agent mobility. Dynamic path planning is a multi-constraint combinatorial optimization problem, and some existing algorithms easily fall into local optima when solving it, which degrades their convergence speed and accuracy. Reinforcement learning offers advantages for sequential decision problems in complex environments, and Q-learning is one such reinforcement learning method. To improve the algorithm's value evaluation on practical problems, this paper introduces a priority weight into the Q-learning algorithm. The improved algorithm is compared with existing algorithms and applied to dynamic obstacle avoidance path planning. Experiments show that it markedly improves convergence speed and accuracy, increases the value evaluation, and finds the shortest path of 16 units in 27 seconds.
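The abstract builds on the standard Q-learning update, Q(s,a) ← Q(s,a) + α[r + γ·maxₐ′ Q(s′,a′) − Q(s,a)]. A minimal sketch of that baseline on a toy grid-world path-planning task is shown below; the grid size, reward values, and hyperparameters are illustrative assumptions, and the paper's priority-weight modification (whose exact form is not given in the abstract) would act on this temporal-difference target.

```python
import random

random.seed(0)

SIZE = 5                          # assumed toy grid, start (0,0), goal (4,4)
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic grid transition with walls clamped at the border."""
    r = max(0, min(SIZE - 1, state[0] + action[0]))
    c = max(0, min(SIZE - 1, state[1] + action[1]))
    nxt = (r, c)
    reward = 10.0 if nxt == GOAL else -1.0   # assumed reward shaping
    return nxt, reward, nxt == GOAL

Q = {(r, c): [0.0] * len(ACTIONS) for r in range(SIZE) for c in range(SIZE)}
alpha, gamma, eps = 0.5, 0.9, 0.1            # assumed hyperparameters

for episode in range(500):
    s, done = (0, 0), False
    while not done:
        # epsilon-greedy exploration
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))
        else:
            a = Q[s].index(max(Q[s]))
        s2, r, done = step(s, ACTIONS[a])
        # Standard temporal-difference update; the paper's priority weight
        # would modulate this target, but its form is not in the abstract.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy rollout of the learned policy.
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 50:
    s, _, _ = step(s, ACTIONS[Q[s].index(max(Q[s]))])
    path.append(s)
```

On this static 5x5 grid the greedy rollout recovers a shortest path of 8 moves; handling moving obstacles, as the paper does, additionally requires the state to encode obstacle positions at each time step.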