Abstract

Recently, model-free power control approaches have been developed to achieve near-optimal performance in cell-free (CF) massive multiple-input multiple-output (MIMO) systems with affordable computational complexity. In particular, deep reinforcement learning (DRL) is a promising technique for realizing effective power control. In this paper, we propose a model-free method that adopts the deep deterministic policy gradient (DDPG) algorithm with feedforward neural networks (NNs) to solve the downlink max-min power control problem in CF massive MIMO systems. Our results show that, compared with the conventional convex optimization algorithm, the proposed DDPG method strikes an effective performance-complexity trade-off: it runs roughly 1,000 times faster while achieving approximately the same user rate as the optimal solution produced by conventional numerical convex optimization solvers, thereby offering a practical power control implementation for large-scale systems. Finally, we extend the DDPG algorithm to both the max-sum and max-product power control problems, achieving better performance than the conventional deep learning algorithm.
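To make the setup concrete, the following is a minimal sketch (not the paper's implementation) of the core DDPG ingredient for this problem: a feedforward actor that maps the large-scale fading coefficients (the state) to downlink power control coefficients (the action), evaluated by a max-min reward equal to the minimum user rate. All dimensions, weights, and the simplified SINR expression below are illustrative assumptions, not taken from the paper; the critic network and the training loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
# M access points serving K users.
M, K = 8, 4
HIDDEN = 32

# Randomly initialized feedforward actor weights (training omitted).
W1 = rng.normal(0.0, 0.1, (M * K, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, M * K))
b2 = np.zeros(M * K)

def actor(beta):
    """Deterministic policy: large-scale fading coefficients (state) ->
    power control coefficients (action), squashed into [0, 1]."""
    h = np.tanh(beta.ravel() @ W1 + b1)              # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # sigmoid keeps powers feasible
    return out.reshape(M, K)

def max_min_reward(beta, eta, noise=1.0):
    """Reward for the max-min objective: the smallest user rate.
    The SINR below is a toy stand-in for the paper's CF massive
    MIMO downlink expressions, used only to illustrate the shape
    of the reward signal."""
    signal = (np.sqrt(eta) * beta).sum(axis=0) ** 2
    interference = (beta ** 2 * (eta.sum(axis=1, keepdims=True) - eta)).sum(axis=0)
    rates = np.log2(1.0 + signal / (noise + interference))
    return rates.min()

# One interaction step: observe a channel realization, act, score.
beta = rng.gamma(2.0, 0.5, (M, K))   # toy large-scale fading realization
eta = actor(beta)
reward = max_min_reward(beta, eta)
```

In an actual DDPG training loop, `reward` would feed the critic's temporal-difference target, and the actor weights would be updated along the critic's gradient with respect to the action; at deployment, power control reduces to the single forward pass in `actor`, which is the source of the large speedup over iterative convex solvers.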
