Abstract

Large Unmanned Aerial Vehicle (UAV) clusters, containing hundreds of UAVs, have been widely used in the modern world. Mission planning is the core of large UAV cluster collaborative systems. In this paper, we propose a mission planning method that introduces the Simple Attention Model (SAM) into Dynamic Information Reinforcement Learning (DIRL), named DIRL-SAM. To reduce the computational complexity of the original attention model, we derive the SAM with a lightweight interactive model that rapidly extracts high-dimensional features of the cluster information. In DIRL, dynamic training conditions are considered to simulate different mission environments. Meanwhile, data expansion in DIRL guarantees the convergence of the model in these dynamic environments, which improves the robustness of the algorithm. Finally, simulation results show that the proposed method can adaptively provide feasible mission planning schemes with solution speed on the order of seconds, and that it exhibits excellent generalization performance on large-scale cluster planning problems.

Highlights

  • Unmanned aerial vehicle (UAV) clusters have been widely used to perform various complex missions in military and civil fields, such as plant protection, mobile signal service, load transportation service, target detection, and strike [1–6]

  • We compare our method with two effective heuristic optimization algorithms: the genetic algorithm (GA) and particle swarm optimization (PSO) [39,40]

  • Results show that the Dynamic Information Reinforcement Learning (DIRL)-Simple Attention Model (SAM) can adaptively allocate UAV groups to each mission in real time and is a practical algorithm for solving large UAV cluster mission planning problems


Introduction

Unmanned aerial vehicle (UAV) clusters have been widely used to perform various complex missions in military and civil fields, such as plant protection, mobile signal service, load transportation service, target detection, and strike [1–6]. Traditional approaches construct an optimization model with an objective function and then apply specific mathematical methods, e.g., gradient descent or dynamic programming, to solve it. We address the large UAV cluster collaborative mission planning problem, where the cluster needs to adaptively assign reasonable UAV subgroups to complete many different missions in real time. To this end, we propose a fast and robust method named dynamic information reinforcement learning (DIRL) with the simple attention model (SAM). The novel DIRL is formed by importing mission information into the UAV data during the dynamic training process, i.e., the mission's requirement constraints, environment influence factor, location, and the weights between different objective functions. The resulting DIRL-SAM method can provide mission planning schemes for different missions in real time with a single trained model, demonstrating that it is both fast and robust.
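To make the attention-based assignment idea concrete, the following is a minimal sketch of how an attention layer can score UAVs against missions. This is an illustrative assumption, not the authors' SAM: the feature dimensions, the single-head scaled dot-product form, and the random placeholder projection matrices (which would be learned during DIRL training) are all hypothetical.

```python
import numpy as np

def attention_scores(uav_feats, mission_feats, rng=None):
    """Score each UAV against each mission via scaled dot-product attention.

    uav_feats:     (n_uavs, d)     per-UAV state features
    mission_feats: (n_missions, d) per-mission requirement features
    Returns a (n_uavs, n_missions) matrix of soft assignment weights,
    each row summing to 1.
    """
    rng = rng or np.random.default_rng(0)
    d = uav_feats.shape[1]
    # In a trained model these projections are learned parameters;
    # random placeholders stand in for them here.
    W_q = rng.standard_normal((d, d)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d)) / np.sqrt(d)
    q = uav_feats @ W_q                 # queries: one per UAV
    k = mission_feats @ W_k             # keys: one per mission
    logits = q @ k.T / np.sqrt(d)       # scaled dot-product compatibility
    # Softmax over missions -> soft assignment probabilities per UAV
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Example: 5 UAVs, 3 missions, 4-dimensional features
rng = np.random.default_rng(1)
probs = attention_scores(rng.standard_normal((5, 4)),
                         rng.standard_normal((3, 4)))
print(probs.shape)  # (5, 3)
```

A full planner would feed these soft weights into a decoder that sequentially commits UAV subgroups to missions subject to the requirement constraints; this sketch covers only the scoring step.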

Mission and UAV Formulation
Illustration of Mission Planning in a Large UAV Cluster
Objective
Multiple Objective Functions of Mission
Constraint Conditions of Mission Planning
DIRL-SAM
Encoder
Process
DIRL Unsupervised Training in Dynamic Environments
Experimental Settings
Simulation
Conclusions

