Abstract

As advanced combat platforms, Unmanned Aerial Vehicles (UAVs) have been widely deployed in military operations. In this paper, we formulate the Autonomous Navigation Control (ANC) problem of UAVs as a Markov Decision Process (MDP) and propose a novel Deep Reinforcement Learning (DRL) method that enables UAVs to perform dynamic target tracking tasks in large-scale unknown environments. To address the problem of limited training experience, the proposed Imaginary Filtered Hindsight Experience Replay (IFHER) generates successful episodes by plausibly imagining the target trajectory in failed episodes, thereby augmenting the available experiences. Well-designed goal, episode, and quality filtering strategies ensure that only high-quality augmented experiences are stored, while the sampling filtering strategy of IFHER ensures that these stored augmented experiences are fully learned according to their high priorities. When trained in a complex environment constructed from the parameters of a real UAV, the proposed IFHER algorithm improves the convergence speed by 28.99% and the convergence result by 11.57% compared with the state-of-the-art Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. Testing experiments carried out in environments of varying complexity demonstrate the strong robustness and generalization ability of the IFHER agent. Moreover, the flight trajectories of the IFHER agent illustrate the superiority of the learned policy and the practical application value of the algorithm.
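As a rough illustration of the mechanism summarized above, the sketch below shows a minimal HER-style replay buffer with the filtering stages named in the abstract (goal, episode/quality, and sampling). The class name IFHERBuffer, the transition layout, and all thresholds are hypothetical reconstructions for illustration only, not the authors' implementation.

    import math
    import random
    from collections import deque

    # Hypothetical sketch of the IFHER idea: when an episode fails, imagine
    # a target trajectory consistent with the UAV's own flight, relabel the
    # goal, filter the augmented episode for quality, and store it with a
    # higher sampling priority. Transitions are assumed to be dicts holding
    # "achieved"/"goal" positions; every name and threshold is illustrative.

    class IFHERBuffer:
        def __init__(self, capacity=100_000, quality_threshold=0.5, priority_boost=2.0):
            self.buffer = deque(maxlen=capacity)
            self.quality_threshold = quality_threshold  # quality-filtering cutoff
            self.priority_boost = priority_boost        # priority of augmented data

        def store_episode(self, episode, succeeded):
            # Original transitions are always stored with baseline priority.
            for t in episode:
                self.buffer.append((t, 1.0))
            if succeeded:
                return
            # Failed episode: pretend the target ended where the UAV actually
            # arrived (goal filtering), then relabel goals and recompute
            # rewards in the spirit of Hindsight Experience Replay.
            imagined_goal = episode[-1]["achieved"]
            if self._quality(episode, imagined_goal) < self.quality_threshold:
                return  # episode/quality filtering: discard low-value augmentations
            for t in episode:
                aug = dict(t, goal=imagined_goal,
                           reward=self._reward(t["achieved"], imagined_goal))
                self.buffer.append((aug, self.priority_boost))

        def sample(self, batch_size):
            # Sampling filtering: priority-weighted draw so augmented
            # experiences are replayed more often than ordinary ones.
            transitions, priorities = zip(*self.buffer)
            return random.choices(transitions, weights=priorities, k=batch_size)

        @staticmethod
        def _reward(achieved, goal, tol=5.0):
            # Sparse tracking reward: success when within tol meters of the goal.
            return 0.0 if math.dist(achieved, goal) <= tol else -1.0

        @staticmethod
        def _quality(episode, goal, tol=5.0):
            # Fraction of steps spent near the imagined goal.
            near = sum(math.dist(t["achieved"], goal) <= tol for t in episode)
            return near / len(episode)

The key design point in this sketch is that augmented transitions carry a larger sampling weight, so the priority-weighted draw in sample corresponds to the "fully learned according to their high priorities" behavior described above.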
