Abstract

Balancing traffic among cells in cellular networks is a challenging problem. Nevertheless, the explosive growth of mobile data traffic makes addressing it a necessity. Given the problem's complexity, data-driven, self-optimizing load-balancing techniques are leading contenders. In this work, we propose a comprehensive deep reinforcement learning (RL) framework that steers the cell individual offset (CIO) as a means of mobility load management. The state of the LTE network is represented by a subset of key performance indicators (KPIs), all of which are readily available to network operators. We provide a diverse set of reward functions to satisfy operators' needs. For a small number of cells, we propose a deep Q-learning technique, and we introduce several enhancements to vanilla deep Q-learning that reduce bias and generalization errors. For a large number of cells, we propose actor-critic RL methods, namely the deep deterministic policy gradient (DDPG) and twin delayed deep deterministic policy gradient (TD3) schemes, to optimize the CIOs. We present extensive simulation results assessing the efficacy of our methods. The results show substantial improvements in downlink throughput and in the number of non-blocked users, at the expense of negligible channel-quality degradation.
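To make the small-cluster setting concrete, the sketch below shows what a deep Q-learning agent for CIO control could look like. It is a minimal, illustrative implementation only: the KPI state dimension, the discrete CIO adjustment steps, the network architecture, and all hyperparameters are assumptions for exposition and are not taken from the paper.

```python
# Illustrative DQN sketch for CIO-based load balancing.
# All sizes and names below are assumptions, not the paper's exact design.
import random
from collections import deque

import torch
import torch.nn as nn

N_CELLS = 3                             # small cluster, as in the DQN setting
CIO_STEPS = [-3.0, 0.0, 3.0]            # assumed discrete CIO adjustments (dB)
N_ACTIONS = len(CIO_STEPS) ** N_CELLS   # one joint action per CIO combination
KPI_DIM = 4 * N_CELLS                   # assumed: a few KPIs per cell, flattened


class QNetwork(nn.Module):
    """Maps a KPI state vector to one Q-value per joint CIO action."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


q_net = QNetwork(KPI_DIM, N_ACTIONS)
target_net = QNetwork(KPI_DIM, N_ACTIONS)   # target network reduces bias
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)               # (state, action, reward, next_state)
gamma, epsilon = 0.99, 0.1


def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy selection over joint CIO adjustments."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())


def train_step(batch_size: int = 64) -> None:
    """One DQN update; the reward is whichever KPI-based objective is chosen."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = (torch.stack(x) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().view(-1, 1)).squeeze(1)
    with torch.no_grad():
        # Vanilla DQN target; a Double-DQN variant would pick the argmax
        # with q_net and evaluate it with target_net.
        target = r + gamma * target_net(s2).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Periodically: target_net.load_state_dict(q_net.state_dict())
```

Note that discretizing the per-cell CIO adjustments makes the joint action space grow exponentially with the number of cells, which is why continuous-action actor-critic methods such as DDPG and TD3 are the natural choice for the large-scale setting the abstract describes.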

