Abstract

With massively growing wireless data traffic, dense cellular networks have become a key deployment mode for the fifth-generation (5G) network. To fully exploit the benefits of dense deployment, it is essential to design an optimal allocation strategy for the limited network resources. In this paper, we investigate the dynamic power allocation problem in the downlink of a cellular network based on multi-agent reinforcement learning (RL), where each base station (BS)-user equipment (UE) link is modeled as an RL agent that learns an optimal power allocation policy to maximize the total system capacity. Owing to the non-convex and large-scale nature of the optimization problem, the computational complexity of traditional centralized methods is unacceptable in practice. Therefore, the power allocation problem is recast as a multi-agent RL (MARL) problem that can be solved in a distributed way by deep reinforcement learning (DRL). We address the scalability of the reward function and state space so that the design adapts to variations in network size, such as the number of BSs or UEs and the coverage area of the cells. Moreover, the impact of the learning hyperparameters on algorithmic performance is evaluated. Finally, the effectiveness and superiority of the proposed method are validated by numerical results in different scenarios.
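To illustrate the underlying optimization objective, the short Python sketch below computes, under assumed values, per-link Shannon rates and the total system capacity for a joint power choice in a downlink network of mutually interfering BS-UE links. The link count, discrete power levels, noise power, bandwidth, and Rayleigh-style channel gains are illustrative assumptions only; the abstract does not disclose the authors' actual state, action, or reward design, so this is a minimal sketch of the quantities a per-link RL agent would typically act on, not the proposed algorithm.

# Minimal sketch, assuming a standard interference-limited downlink model.
# All constants and the channel model below are illustrative assumptions,
# not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

N_LINKS = 4                                      # assumed number of BS-UE links (agents)
POWER_LEVELS = np.array([0.0, 0.1, 0.5, 1.0])    # assumed discrete transmit powers (W)
NOISE_POWER = 1e-9                               # assumed receiver noise power (W)
BANDWIDTH = 1e7                                  # assumed bandwidth (Hz)

# g[i, j]: channel gain from BS j to the UE served by BS i (illustrative Rayleigh-type fading)
g = rng.exponential(scale=1e-7, size=(N_LINKS, N_LINKS))

def link_rates(power_idx):
    """Shannon rate of each BS-UE link for a joint choice of power-level indices."""
    p = POWER_LEVELS[power_idx]                  # transmit power chosen by every BS
    signal = np.diag(g) * p                      # desired received power per link
    interference = g @ p - signal                # interference from all other BSs
    sinr = signal / (interference + NOISE_POWER)
    return BANDWIDTH * np.log2(1.0 + sinr)       # bit/s per link

# Each agent picks its own power level; the common objective is the total capacity.
joint_action = rng.integers(0, len(POWER_LEVELS), size=N_LINKS)
rates = link_rates(joint_action)
print("per-link rates (bit/s):", rates)
print("total system capacity (bit/s):", rates.sum())

In such a formulation, the coupling through the interference term is what makes the joint power allocation non-convex and motivates a distributed, learning-based treatment rather than a centralized search over all joint power choices.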
