Abstract

Coverage path planning (CPP) for unmanned aerial vehicles (UAVs) plays a significant role in intelligent distributed surveillance systems. However, due to poor cooperation, most existing CPP methods can produce heavily overlapping trajectories, missed areas, or even collisions in uncertain and complex environments, which prolongs task completion time and lowers coverage efficiency. To address this, we propose a novel multi-UAV distributed online cooperation (MDOC) CPP method that aims to minimize task completion time. The method also allows UAVs to respond quickly to unknown obstacles and complex emergencies, such as UAV breakdown or communication interruption. To establish close cooperation between UAVs, we propose an efficient environmental information map (EI-map) fusion technique that enables them to cooperatively obtain a global view of exploration progress in real time. We then develop a distributed cooperative deep Q-learning (DCDQN) algorithm that plans the UAVs' coverage paths online, minimizing task time while avoiding overlaps, missed areas, and collisions. Specifically, building on the fused EI-map, we expand the state space of DCDQN to collect sufficient observations and design a novel cooperative learning pattern that plans paths efficiently toward global optimality. Simulation results show that our method outperforms state-of-the-art methods in task completion time and coverage efficiency, especially in uncertain and complex environments. In addition, we validate that our method can efficiently complete full coverage even in emergencies.
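The EI-map fusion idea described above can be made concrete with a minimal, hypothetical sketch: each UAV maintains a grid of cell states and merges the grids received from its neighbours with an element-wise maximum, so that all UAVs converge to the same picture of global exploration progress. The class and method names (EIMap, mark_covered, fuse) and the three-state cell encoding are illustrative assumptions, not the paper's actual data structures.

# Hypothetical sketch of cooperative EI-map fusion; names and cell encoding
# are assumptions for illustration, not taken from the paper.
import numpy as np

FREE, COVERED, OBSTACLE = 0, 1, 2  # assumed cell states

class EIMap:
    def __init__(self, height: int, width: int):
        # All cells start as unexplored free space.
        self.grid = np.full((height, width), FREE, dtype=np.int8)

    def mark_covered(self, row: int, col: int):
        # A UAV records a cell it has just swept.
        if self.grid[row, col] == FREE:
            self.grid[row, col] = COVERED

    def mark_obstacle(self, row: int, col: int):
        # A newly detected obstacle overrides any other state.
        self.grid[row, col] = OBSTACLE

    def fuse(self, other: "EIMap"):
        # Element-wise maximum keeps the most informative state per cell
        # (OBSTACLE > COVERED > FREE), so fusion is order-independent and
        # idempotent, which is convenient when messages arrive asynchronously.
        self.grid = np.maximum(self.grid, other.grid)

    def coverage_ratio(self) -> float:
        # Fraction of non-obstacle cells already covered; usable as a
        # stopping criterion or as part of a reward signal.
        free_or_covered = self.grid != OBSTACLE
        return float((self.grid[free_or_covered] == COVERED).mean())

# Example: two UAVs cover different cells, then exchange and fuse maps.
uav_a, uav_b = EIMap(4, 4), EIMap(4, 4)
uav_a.mark_covered(0, 0)
uav_b.mark_covered(3, 3)
uav_b.mark_obstacle(1, 1)
uav_a.fuse(uav_b)
uav_b.fuse(uav_a)
print(uav_a.coverage_ratio(), np.array_equal(uav_a.grid, uav_b.grid))

Under these assumptions, the max-based merge means that repeated or out-of-order map exchanges never lose information, which is the property a distributed fusion step needs when communication is intermittent.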
