Abstract

The swarm relay and power allocation policy determines the bit error rate and the energy consumption of unmanned aerial vehicles (UAVs) and can be optimized based on the network and jamming model, which is rarely known to the UAVs. In this paper, we propose a multi-agent reinforcement learning (RL)-based UAV swarm communication scheme to optimize relay selection and power allocation against jamming. Based on the network topology, the channel states, the previous performance, and the observations shared by neighboring UAVs, this scheme formulates the policy distribution to improve policy exploration and applies a policy learning mechanism to stabilize the learning process. Based on transfer learning, the shared swarm experiences are exploited to accelerate the initial learning and improve policy optimization. A deep RL-based scheme is proposed to mitigate the state quantization error caused by rapidly changing channel states under high swarm moving speed, and thus further improve the anti-jamming performance. This scheme designs a policy network with four fully connected layers to approximate the policy distribution, and uses two additional neural networks to estimate the average policy distribution and the expected long-term utility, respectively, which update the policy network for stabilized deep learning. We analyze the computational complexity and derive performance bounds on the bit error rate, the energy consumption, and the utility. Simulation and experimental results verify the performance gain of our proposed schemes over related works.
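The abstract states that the deep RL scheme approximates the policy distribution with a policy network of four fully connected layers whose output is a distribution over relay/power actions. The following is a minimal dependency-free sketch of such a network; the layer widths, activations, weight initialization, and the size of the state and action spaces are illustrative assumptions, not details given in the paper.

```python
import math
import random


def dense(x, weights, biases, activation=None):
    """One fully connected layer: y = activation(W x + b)."""
    y = [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
         for row, b in zip(weights, biases)]
    if activation == "relu":
        y = [max(0.0, v) for v in y]
    return y


def softmax(z):
    """Numerically stable softmax producing a probability distribution."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]


class PolicyNetwork:
    """Sketch of a policy network with four fully connected layers.

    Maps the UAV's observed state (e.g., channel gains, topology features,
    observations shared by neighbors) to a probability distribution over
    joint relay-selection/power-allocation actions. All dimensions and the
    random initialization here are hypothetical.
    """

    def __init__(self, sizes, seed=0):
        # `sizes` lists layer widths; 5 entries -> 4 fully connected layers.
        rng = random.Random(seed)
        self.layers = []
        for n_in, n_out in zip(sizes[:-1], sizes[1:]):
            w = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)]
                 for _ in range(n_out)]
            b = [0.0] * n_out
            self.layers.append((w, b))

    def forward(self, state):
        h = state
        for i, (w, b) in enumerate(self.layers):
            # ReLU on hidden layers; the last layer feeds the softmax.
            act = "relu" if i < len(self.layers) - 1 else None
            h = dense(h, w, b, act)
        return softmax(h)  # policy distribution over actions


# Hypothetical dimensions: 8 state features, 6 relay/power actions.
net = PolicyNetwork([8, 32, 32, 16, 6])
probs = net.forward([0.1] * 8)
```

In the scheme described by the abstract, two further networks (estimating the average policy distribution and the expected long-term utility) would drive the updates of these weights; they are omitted here since the abstract does not specify their structure beyond their roles.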
