Abstract
Model-based power allocation has been investigated for decades, but this approach requires mathematical models that are analytically tractable and typically incurs high computational complexity. Recently, data-driven, model-free approaches have developed rapidly, achieving near-optimal performance with affordable computational complexity, and deep reinforcement learning (DRL) is regarded as one such approach with great potential for future intelligent networks. In this paper, a dynamic downlink power control problem is considered for maximizing the sum-rate in a multi-user wireless cellular network. Using cross-cell coordination, the proposed multi-agent DRL framework combines offline and online centralized training with distributed execution, and a mathematical analysis is presented for the top-level design of the near-static problem. A policy-based REINFORCE algorithm, a value-based deep Q-learning (DQL) algorithm, and an actor-critic deep deterministic policy gradient (DDPG) algorithm are proposed for this sum-rate problem. Simulation results show that the data-driven approaches outperform state-of-the-art model-based methods in sum-rate performance. Furthermore, DDPG outperforms REINFORCE and DQL in terms of both sum-rate performance and robustness.
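For concreteness, the sum-rate objective referenced above can be stated in its standard form as the following non-convex power control problem; the notation here (per-cell transmit powers $p_k$, channel gains $g_{jk}$ from transmitter $j$ to user $k$, noise power $\sigma^2$, and power budget $P_{\max}$) is an assumption, since the abstract itself does not define symbols:

$$
\max_{p_1,\dots,p_K} \; \sum_{k=1}^{K} \log_2\!\left(1 + \frac{g_{kk}\, p_k}{\sigma^2 + \sum_{j \neq k} g_{jk}\, p_j}\right)
\quad \text{subject to} \quad 0 \le p_k \le P_{\max}, \ \forall k.
$$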