Abstract

Autonomy is a research hotspot for unmanned aerial vehicles (UAVs), and, as an extension of this field, autonomous path planning is also of concern. To meet the demands of path planning in dynamic environments, a hybrid double-deck joint path planning strategy based on deep Q-network and Q-learning (D3Q) is proposed in this paper. In contrast to using a single deep Q-network (DQN) for path planning in dynamic environments, the proposed D3Q employs two algorithms to handle static and dynamic obstacles, respectively, which avoids the poor network fitting that arises when only a DQN is used. Furthermore, a heuristic fish (HF) algorithm is presented as a prior strategy that assists D3Q in exploring the environment and thereby speeds up training. Simulation results demonstrate that the proposed approach performs dynamic path planning well in different scenarios and generates shorter, more reliable trajectories.
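To make the double-deck idea more concrete, the minimal sketch below shows one plausible arbitration scheme: a DQN deck scores actions against the static layout, while a tabular Q-learning deck takes over when a dynamic obstacle is nearby. The 4-dimensional state, the four actions, and the arbitration rule are illustrative assumptions, not the paper's actual design.

```python
import random
from collections import defaultdict

import torch
import torch.nn as nn

# Upper deck: a small DQN scoring actions for the static-obstacle layer.
# State/action sizes are assumptions for illustration only.
N_ACTIONS = 4

dqn = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)

# Lower deck: a tabular Q-learning policy reserved for nearby dynamic obstacles.
q_table = defaultdict(lambda: [0.0] * N_ACTIONS)


def choose_action(state, dynamic_obstacle_near, epsilon=0.1):
    """Double-deck action selection (hypothetical arbitration rule):
    Q-learning handles dynamic obstacles, the DQN handles the static layout."""
    if random.random() < epsilon:                      # epsilon-greedy exploration
        return random.randrange(N_ACTIONS)
    if dynamic_obstacle_near:                          # lower deck takes over
        return max(range(N_ACTIONS), key=lambda a: q_table[tuple(state)][a])
    with torch.no_grad():                              # upper deck (DQN) otherwise
        q_values = dqn(torch.tensor(state, dtype=torch.float32))
    return int(q_values.argmax().item())


def q_learning_update(s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update for the dynamic-obstacle deck."""
    best_next = max(q_table[tuple(s_next)])
    q_table[tuple(s)][a] += alpha * (r + gamma * best_next - q_table[tuple(s)][a])


# Example call with a toy state (relative goal and nearest-obstacle offsets).
action = choose_action([0.2, -0.1, 0.5, 0.3], dynamic_obstacle_near=True)
```

The heuristic fish (HF) prior described in the abstract could be layered on top of this by biasing the exploratory action choice instead of sampling uniformly; that detail is left out of the sketch.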
