Abstract

Collision-free path planning is a major challenge in managing fleets of unmanned aerial vehicles (UAVs), especially in uncertain environments. In this paper, we consider the design of UAV routing policies using multi-agent reinforcement learning, and propose a Multi-resolution, Multi-agent, Mean-field reinforcement learning algorithm, named 3M-RL, for flight planning, where multiple vehicles need to avoid collisions with each other while moving towards their destinations. In the system we consider, each UAV makes decisions based on local observations and does not communicate with other UAVs. The algorithm trains a routing policy using an Actor-Critic neural network with multi-resolution observations, combining detailed local information with aggregated global information based on a mean-field representation. The algorithm tackles the curse-of-dimensionality problem in multi-agent reinforcement learning and provides a scalable solution. We test our algorithm in complex scenarios in both 2D and 3D space, and our simulation results show that 3M-RL produces good routing policies.
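The abstract describes an Actor-Critic policy that consumes a multi-resolution observation: a detailed local view plus a coarse mean-field aggregate of the other agents. The sketch below is a minimal, illustrative rendering of that idea only; the grid encoding, network architecture, and all parameter names here are assumptions, not the paper's actual design.

```python
import numpy as np


def mean_field_aggregate(positions, grid_size=4, world_size=20.0):
    """Summarize global UAV positions as a coarse density grid.

    This is one plausible mean-field observation (assumed here); the
    paper's exact aggregation scheme may differ.
    """
    density = np.zeros((grid_size, grid_size))
    cell = world_size / grid_size
    for x, y in positions:
        i = min(int(x // cell), grid_size - 1)
        j = min(int(y // cell), grid_size - 1)
        density[i, j] += 1.0
    # Normalize so the grid is a distribution over cells.
    return density / max(len(positions), 1)


class ActorCritic:
    """Tiny linear actor-critic over concatenated local + mean-field inputs.

    Purely illustrative: a single linear layer per head, standing in for
    whatever neural network the paper actually trains.
    """

    def __init__(self, obs_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.W_pi = rng.normal(scale=0.1, size=(n_actions, obs_dim))  # actor head
        self.W_v = rng.normal(scale=0.1, size=(obs_dim,))             # critic head

    def forward(self, local_obs, mf_obs):
        # Multi-resolution observation: flatten and concatenate the
        # detailed local view with the coarse mean-field grid.
        obs = np.concatenate([local_obs.ravel(), mf_obs.ravel()])
        logits = self.W_pi @ obs
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()          # softmax policy over discrete moves
        value = self.W_v @ obs        # state-value estimate
        return probs, value
```

A usage example: each UAV would call `forward` with its own local occupancy patch (here a 3x3 grid) and the shared mean-field density, then sample an action from the returned probabilities.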
