Abstract

Over the past decades, unmanned aerial vehicles (UAVs) have been widely used in both military and civilian fields. In these applications, flocking motion is a fundamental and crucial operation of multi-UAV systems. Traditional flocking motion methods are usually designed for a specific environment. However, real environments are mostly unknown and stochastic, which greatly reduces the practicality of these methods. In this article, deep reinforcement learning (DRL) is used to realize the flocking motion of multi-UAV systems. Considering that the sim-to-real problem restricts the application of DRL to the flocking motion scenario, a digital twin (DT)-enabled DRL training framework is proposed to solve this problem. The DRL model can learn from the DT and, with its help, be quickly deployed on real-world UAVs. Under this training framework, this article proposes an actor–critic DRL algorithm, named behavior-coupling deep deterministic policy gradient (BCDDPG), for the flocking motion problem, which is inspired by the flocking behavior of animals. Extensive simulations are conducted to evaluate the performance of BCDDPG. Simulation results show that BCDDPG achieves a higher average reward and performs better in terms of arrival rate and collision rate compared with existing methods.
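As context for readers unfamiliar with the base algorithm, the sketch below shows a minimal DDPG-style actor–critic update in PyTorch, the family of methods BCDDPG extends. This is an illustrative sketch only: the state/action dimensions, network sizes, and hyperparameters are assumptions, and the behavior-coupling structure specific to BCDDPG is not reproduced here.

```python
# Minimal sketch of a DDPG-style actor-critic update (the base of BCDDPG).
# All dimensions, layer sizes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 12, 3   # assumed UAV state/action sizes
GAMMA, TAU = 0.99, 0.005        # assumed discount factor and soft-update rate

class Actor(nn.Module):
    """Deterministic policy: maps a UAV state to a continuous control action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),  # bounded control output
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q-function: scores a (state, action) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(s, a, r, s2, done):
    """One DDPG update on a batch of replayed transitions (s, a, r, s2, done)."""
    # Critic: regress Q(s, a) toward the bootstrapped TD target.
    with torch.no_grad():
        target_q = r + GAMMA * (1 - done) * critic_tgt(s2, actor_tgt(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the deterministic policy gradient through the critic.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft-update the target networks toward the online networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)
```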
