Abstract

By providing wireless connectivity where network infrastructure is unavailable, or simply by complementing conventional base stations, Unmanned Aerial Vehicles (UAVs) are expected to play a vital role in 6G networks. UAVs organized as a Flying Ad-hoc Network (FANET) can process jobs received from ground devices through vertical offloading. Moreover, the load among UAVs can be balanced by leveraging horizontal offloading among the UAVs in the FANET, with the aim of reducing processing delay. However, offloading decisions inside a FANET have to be dynamically adapted to the current state of the FANET and to the number of job requests arriving from the underlying geographical area. As the number of UAVs in the FANET increases, many Deep Reinforcement Learning (DRL) approaches fall short of achieving reasonable performance due to the exponential growth of the action space. For this reason, in this paper we propose a DRL-based framework for managing the FANET and compare the performance of centralized single-agent and distributed multi-agent approaches in choosing the offloading probabilities with which incoming jobs are forwarded to neighboring UAVs.
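To make the scaling argument concrete, the following minimal Python sketch (our illustration, not code from the paper; the values of K, D, and the UAV counts are assumptions) contrasts the per-agent action space faced by a distributed multi-agent learner with the joint action space a centralized single-agent learner must explore when each UAV selects one of K discretized offloading-probability levels for each of its D neighbors.

```python
# Hypothetical illustration (not from the paper): action-space growth
# for centralized single-agent vs. distributed multi-agent DRL.
K = 5  # assumed number of discrete offloading-probability levels per neighbor
D = 3  # assumed number of neighboring UAVs each UAV can offload to

for n_uavs in (2, 5, 10, 20):
    per_agent = K ** D            # actions one UAV's agent chooses among
    joint = per_agent ** n_uavs   # joint actions a single central agent faces
    print(f"{n_uavs:>2} UAVs: per-agent actions = {per_agent}, "
          f"centralized joint actions = {joint:.3e}")
```

The per-agent action space stays constant as the FANET grows, while the centralized joint action space grows exponentially with the number of UAVs, which is the scalability issue motivating the multi-agent comparison.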
