Abstract

Unmanned Aerial Vehicle (UAV) path planning refers to a UAV automatically planning an optimal path to its destination in a given environment while avoiding collisions with obstacles along the way. To solve the problem of 3D path planning for a UAV in a dynamic environment, a heuristic dynamic reward function is designed to guide the UAV. We propose the Environment Exploration Twin Delayed Deep Deterministic Policy Gradient (EE-TD3) algorithm, which adds a symmetrical 3D environment exploration coding mechanism on top of the TD3 algorithm. The EE-TD3 model can effectively avoid collisions, improve training efficiency, and achieve faster convergence. Finally, the performance of the EE-TD3 algorithm and other deep reinforcement learning algorithms was tested in a simulation environment. The results show that EE-TD3 outperforms the other algorithms in solving the 3D path planning problem for UAVs.
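The abstract does not give the exact form of the heuristic dynamic reward function. As a minimal sketch, the Python snippet below shows one common way to construct such a dense, distance-based shaping reward for 3D path planning, assuming spherical obstacles; the function name, weights, and thresholds are hypothetical and not taken from the paper.

```python
import numpy as np

def heuristic_reward(pos, prev_pos, goal, obstacles,
                     safe_dist=1.0, goal_radius=0.5,
                     w_progress=1.0, w_obstacle=0.5):
    """Hypothetical shaping reward: reward progress toward the goal,
    penalize proximity to obstacles, and terminate on collision or arrival."""
    # Progress term: positive when the UAV moves closer to the goal.
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)
    reward = w_progress * progress

    # Obstacle term: penalize entering the safety margin of any obstacle,
    # modeled here as (center, radius) spheres.
    for center, radius in obstacles:
        clearance = np.linalg.norm(pos - center) - radius
        if clearance < safe_dist:
            reward -= w_obstacle * (safe_dist - max(clearance, 0.0))
        if clearance <= 0.0:                 # collision: large penalty, episode ends
            return reward - 10.0, True

    if np.linalg.norm(pos - goal) < goal_radius:  # goal reached: bonus, episode ends
        return reward + 10.0, True
    return reward, False
```

A reward of this shape gives the agent a learning signal at every step rather than only at the terminal state, which is the usual motivation for heuristic reward design in deep-RL path planning.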
