Abstract

Autonomy is a central research topic for unmanned aerial vehicles (UAVs), and autonomous path planning is an important extension of it. To meet the demands of path planning in dynamic environments, this paper proposes a hybrid double-deck joint path planning strategy based on a deep Q-network and Q-learning (D3Q). Compared with using a single deep Q-network (DQN) for path planning in dynamic environments, the proposed D3Q employs two algorithms to handle static and dynamic obstacles respectively, which avoids the poor network fitting that arises when only a DQN is used. Furthermore, a heuristic fish (HF) algorithm is presented as a prior strategy that assists D3Q in exploring the environment and thereby speeds up training. Simulation results demonstrate that the proposed approach performs dynamic path planning well in different scenarios and generates comparatively shorter and more dependable trajectories.
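To make the double-deck idea concrete, the sketch below shows one plausible way such a planner could be structured: a DQN-style layer chooses actions for global planning around static obstacles, while a tabular Q-learning layer takes over whenever a dynamic obstacle comes close. This is only an illustrative assumption based on the abstract; the class names, the `danger_radius` switch, the grid action set, and the untrained linear "network" are all hypothetical and are not taken from the paper.

```python
import numpy as np

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # grid moves: up, down, right, left


class TinyQNetwork:
    """Stand-in for the DQN layer: a single untrained linear layer that maps
    state features to Q-values over the four grid actions (illustrative only)."""

    def __init__(self, state_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(state_dim, n_actions))

    def q_values(self, state):
        return np.asarray(state, dtype=float) @ self.w


class DoubleDeckPlanner:
    """Hypothetical two-layer planner: the DQN-style layer handles global
    planning around static obstacles, and a tabular Q-learning layer handles
    avoidance when a dynamic obstacle is within `danger_radius` of the UAV."""

    def __init__(self, state_dim, danger_radius=2.0, alpha=0.1, gamma=0.9):
        self.global_net = TinyQNetwork(state_dim, len(ACTIONS))
        self.local_q = {}  # tabular Q-values keyed by a discretised local observation
        self.danger_radius = danger_radius
        self.alpha, self.gamma = alpha, gamma

    def act(self, state, pos, dynamic_obstacles):
        """Pick an action: use the local Q-table near dynamic obstacles,
        otherwise fall back to the global network."""
        dists = [np.hypot(pos[0] - o[0], pos[1] - o[1]) for o in dynamic_obstacles]
        if dists and min(dists) < self.danger_radius:
            key = self._local_key(pos, dynamic_obstacles)
            q = self.local_q.setdefault(key, np.zeros(len(ACTIONS)))
        else:
            q = self.global_net.q_values(state)
        return int(np.argmax(q))

    def update_local(self, key, action, reward, next_key):
        """Standard Q-learning update for the dynamic-obstacle layer."""
        q = self.local_q.setdefault(key, np.zeros(len(ACTIONS)))
        q_next = self.local_q.setdefault(next_key, np.zeros(len(ACTIONS)))
        q[action] += self.alpha * (reward + self.gamma * q_next.max() - q[action])

    def _local_key(self, pos, dynamic_obstacles):
        # Discretise the relative position of the nearest dynamic obstacle.
        nearest = min(
            dynamic_obstacles,
            key=lambda o: np.hypot(pos[0] - o[0], pos[1] - o[1]),
        )
        return (round(nearest[0] - pos[0]), round(nearest[1] - pos[1]))


# Example usage: a state feature vector, the UAV position, and one moving obstacle.
planner = DoubleDeckPlanner(state_dim=4)
action = planner.act(state=[0.2, 0.5, 0.1, 0.9], pos=(3.0, 3.0), dynamic_obstacles=[(4.0, 3.5)])
print("chosen action index:", action)
```

In this reading, the switch between layers plays the role of the "joint" strategy described in the abstract, and the HF prior would bias exploration during training of both layers; how the paper actually combines the two decks and applies the HF algorithm is not specified here.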
