Abstract

A growing number of technical challenges arise around user association in increasingly complex 5G heterogeneous networks. With distributed multiple attribute decision making (MADM) algorithms, users tend to maximize their utilities selfishly for lack of cooperation, leading to congestion. Artificial intelligence is therefore an effective tool for these emerging problems, as it enables users to learn with incomplete environment information. In this paper, we propose an adaptive user association approach based on multi-agent deep reinforcement learning (RL), considering various user equipment types and femtocell access mechanisms. It aims to achieve a desirable trade-off between Quality of Experience (QoE) and load balancing. We formulate user association as a Markov Decision Process and exploit a deep RL approach, a semi-distributed deep Q-network (DQN), to obtain the optimal strategy. The individual reward is defined as a function of transmission rate and base station load, adaptively balanced by a designed weight. Simulation results reveal that DQN with an adaptive weight achieves the highest average reward compared with DQN with a fixed weight and with MADM, indicating that it obtains the best trade-off between QoE and load balancing. Compared with MADM, our approach improves QoE, load balancing, and blocking probability by \({4\%\sim 11\%}\), \({32\%\sim 40\%}\), and \({99\%}\), respectively. Furthermore, the semi-distributed framework reduces computational complexity.
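Below is a minimal Python sketch of the kind of adaptively weighted per-user reward the abstract describes, assuming a linear mix of a normalized rate term and a load term with a sigmoid weight schedule; the function names, normalization constant, and schedule parameters are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def adaptive_reward(rate_bps, bs_load, max_rate_bps, w):
    """Per-user reward trading off transmission rate (QoE) against
    base station load (load balancing), mixed by a weight w in [0, 1].

    rate_bps     : achievable transmission rate of the user (bit/s)
    bs_load      : load of the serving base station, normalized to [0, 1]
    max_rate_bps : rate normalization constant (assumed for illustration)
    w            : adaptive weight; larger w favors QoE over balancing
    """
    qoe_term = rate_bps / max_rate_bps   # normalized rate term in [0, 1]
    balance_term = 1.0 - bs_load         # lightly loaded BS -> higher reward
    return w * qoe_term + (1.0 - w) * balance_term

def adaptive_weight(bs_load, k=10.0, load_threshold=0.7):
    """Illustrative weight schedule (an assumption, not the paper's design):
    shift emphasis from QoE toward load balancing as the serving base
    station approaches congestion, via a sigmoid around the threshold."""
    return 1.0 / (1.0 + np.exp(k * (bs_load - load_threshold)))

# Example: on a heavily loaded BS the weight drops, so the reward is
# dominated by the load-balancing term, discouraging further association.
r = adaptive_reward(rate_bps=50e6, bs_load=0.9,
                    max_rate_bps=100e6, w=adaptive_weight(0.9))
```

Under this kind of design, each agent's DQN can keep a single scalar reward while the weight schedule, rather than a fixed hand-tuned constant, steers the QoE versus load-balancing trade-off as network conditions change.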
