Abstract

Path planning algorithms for unmanned vehicles suffer from low search efficiency, slow convergence, and a tendency to fall into local optima. Based on the characteristics of route planning for unmanned vehicles, this paper introduces Q-learning into the traditional ant colony algorithm to enhance the algorithm's learning ability in dynamic environments and thereby improve overall search efficiency. By mapping pheromones to the Q-values of Q-learning, the algorithm achieves rapid search in complex environments and quickly finds a collision-free path that satisfies the constraints. Case-study results show that, compared with the traditional ant colony algorithm and an improved ant colony algorithm with reward and punishment factors, the Q-learning-based improved ant colony algorithm effectively reduces the number of iterations, shortens the path-optimization time, and produces shorter paths. It is better at escaping local optima, offers stronger global search ability and faster convergence, and shows good adaptability and robustness in complex environments, ensuring the safety and stability of unmanned vehicles.
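The core idea of mapping pheromones onto Q-values can be illustrated with a minimal grid-map sketch. Everything below is an illustrative assumption rather than the paper's actual formulation: the parameter names (`alpha`, `beta`, `lr`, `gamma`), the inverse-path-length reward, and the inverse-Manhattan-distance heuristic are all placeholders chosen to show the mechanism, in which each ant selects moves with probability proportional to a Q-value (playing the role of pheromone) and the Q-table is updated along the ant's path with the standard Q-learning rule.

```python
import random

def q_aco_path(grid, start, goal, n_ants=20, n_iters=50,
               alpha=1.0, beta=2.0, lr=0.1, gamma=0.9, seed=0):
    """Hypothetical sketch: ant colony search where the pheromone table
    is replaced by a Q-table updated with the Q-learning rule.
    grid is a 2D list; 0 = free cell, 1 = obstacle."""
    random.seed(seed)
    rows, cols = len(grid), len(grid[0])
    Q = {}  # Q[(cell, next_cell)] -> value, acting as pheromone intensity

    def neighbors(c):
        r, k = c
        for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nk = r + dr, k + dk
            if 0 <= nr < rows and 0 <= nk < cols and grid[nr][nk] == 0:
                yield (nr, nk)

    def heuristic(c):
        # inverse Manhattan distance to the goal (assumed heuristic)
        return 1.0 / (abs(c[0] - goal[0]) + abs(c[1] - goal[1]) + 1)

    best = None
    for _ in range(n_iters):
        for _ in range(n_ants):
            path, visited, cur = [start], {start}, start
            while cur != goal:
                cands = [n for n in neighbors(cur) if n not in visited]
                if not cands:
                    path = None  # dead end: abandon this ant
                    break
                # transition weight: Q-value (pheromone) x heuristic,
                # in the style of the ACO state-transition rule
                weights = [(Q.get((cur, n), 0.1) ** alpha) *
                           (heuristic(n) ** beta) for n in cands]
                cur = random.choices(cands, weights=weights)[0]
                path.append(cur)
                visited.add(cur)
            if path is None:
                continue
            # Q-learning update along the completed path:
            # shorter paths yield a larger reward per step
            reward = 1.0 / len(path)
            for a, b in zip(path, path[1:]):
                max_next = max((Q.get((b, n), 0.1) for n in neighbors(b)),
                               default=0.0)
                old = Q.get((a, b), 0.1)
                Q[(a, b)] = old + lr * (reward + gamma * max_next - old)
            if best is None or len(path) < len(best):
                best = path
    return best
```

Replacing additive pheromone deposition with the Q-learning update gives the colony a bootstrapped value estimate (`gamma * max_next`) of downstream states, which is one plausible reading of how such a hybrid accelerates convergence and escapes local optima.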
