Abstract

A novel exploration mechanism is presented to address the slow convergence, excessive redundancy, and low path quality caused by the inherent randomness of the Rapidly-exploring Random Tree (RRT) sampling approach. First, the node exploration process of the RRT algorithm is modeled as a Markov Decision Process (MDP) by designing an action space and a reward function. Subsequently, a novel node exploration mechanism based on RRT-Connect is developed by integrating environmental feedback information. Finally, the proposed DQN-RRT algorithm combines Deep Q-Network (DQN) and RRT by incorporating the structure and training method of DQN. Compared to the traditional RRT algorithm, the proposed algorithm balances planning autonomy, reduces search redundancy, and achieves efficient obstacle avoidance. Simulation results validate the performance improvements of the proposed algorithm in RRT path planning.
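
The abstract does not give implementation details; the following minimal Python sketch illustrates one plausible reading of the core idea: a Q-network scores a discrete set of extension headings for the current tree node, replacing RRT's uniform random sampling with a learned, environment-feedback-driven choice. All names and parameters here (QNet, dqn_rrt_step, the eight-heading action space, the step size, and the reward shaping) are illustrative assumptions, not taken from the paper.

    # Minimal sketch of a DQN-guided RRT extension step (assumed design,
    # not the authors' implementation).
    import math
    import random
    import numpy as np
    import torch
    import torch.nn as nn

    ACTIONS = [i * math.pi / 4 for i in range(8)]  # assumed 8-heading action space
    STEP = 0.5                                      # assumed extension step size

    class QNet(nn.Module):
        """Maps a state (node position + goal offset) to one Q-value per heading."""
        def __init__(self, state_dim=4, n_actions=len(ACTIONS)):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, s):
            return self.net(s)

    def dqn_rrt_step(q_net, node, goal, collides, eps=0.1):
        """One guided extension: epsilon-greedy heading choice, shaped reward.
        `collides(p)` is an assumed environment callback: True on collision."""
        state = torch.tensor([*node, goal[0] - node[0], goal[1] - node[1]],
                             dtype=torch.float32)
        if random.random() < eps:              # keep some RRT-style randomness
            a = random.randrange(len(ACTIONS))
        else:
            with torch.no_grad():
                a = int(q_net(state).argmax())
        theta = ACTIONS[a]
        new = (node[0] + STEP * math.cos(theta), node[1] + STEP * math.sin(theta))
        if collides(new):
            return node, a, -1.0               # obstacle feedback: penalty, no growth
        # one plausible shaping choice: reward progress toward the goal
        gain = np.hypot(*np.subtract(goal, node)) - np.hypot(*np.subtract(goal, new))
        return new, a, float(gain)

In a full DQN-RRT loop, the (state, action, reward, next state) transitions collected this way would feed a standard DQN training setup with experience replay and a target network; the epsilon-greedy choice preserves part of RRT's exploratory randomness while the learned Q-values bias extensions toward the goal and away from obstacles.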
