Abstract

In this paper, we design a fully distributed control solution that navigates a group of unmanned aerial vehicles (UAVs), acting as mobile base stations (BSs), to fly around a target area and provide long-term communication coverage for ground mobile users. Unlike existing solutions that mainly approach the problem from an optimization perspective, we propose a decentralized deep reinforcement learning (DRL) framework that controls each UAV in a distributed manner. Our goal is to maximize the temporal average coverage score achieved by all UAVs in a task, maximize the geographical fairness across all considered points of interest (PoIs), and minimize the total energy consumption, while keeping the UAVs connected and within the area border. We explicitly design the state, observation, action space, and reward, and model each UAV with deep neural networks (DNNs). Through extensive simulations, we identify an appropriate set of hyperparameters, including the experience replay buffer size, the number of units in the two fully connected hidden layers of the actor, critic, and their target networks, and the discount factor applied to future rewards. The simulation results demonstrate the superiority of the proposed model over the state-of-the-art DRL-EC3 approach based on deep deterministic policy gradient (DDPG), as well as three other baselines.
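
To make the described architecture concrete, the sketch below shows what a per-UAV actor-critic pair with two fully connected hidden layers and target networks might look like in PyTorch. This is an illustrative sketch, not the authors' implementation: the observation and action dimensions, hidden-layer widths, discount factor, and replay buffer size are all assumed values for demonstration, not figures taken from the paper.

```python
# Minimal sketch of a per-UAV actor-critic pair (DDPG-style), assuming
# illustrative dimensions and hyperparameters -- none are from the paper.
import copy
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 16, 2   # assumed per-UAV observation and action sizes
HIDDEN = 128               # assumed width of the two fully connected layers
GAMMA = 0.99               # assumed discount factor for future rewards
BUFFER_SIZE = 100_000      # assumed experience replay buffer capacity

class Actor(nn.Module):
    """Maps a UAV's local observation to a continuous flight action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, ACT_DIM), nn.Tanh(),  # bounded action output
        )

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Estimates the Q-value of an (observation, action) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

actor, critic = Actor(), Critic()
# Target networks: slowly updated copies that stabilize the TD targets
# during training, as in standard DDPG.
actor_target = copy.deepcopy(actor)
critic_target = copy.deepcopy(critic)
```

In a decentralized setup of this kind, each UAV would hold its own copy of these networks and act on local observations, which is what distinguishes the distributed framing from a single centralized controller.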
