Abstract

Traditional manual procedures for Coverage and Capacity Optimization are laborious and time-consuming due to the increasing complexity of cellular networks. This paper presents reinforcement learning strategies for self-organized coverage and capacity optimization through antenna downtilt adaptation. We analyze different learning strategies for a Fuzzy Q-Learning based solution in order to achieve a fully autonomous optimization process. The learning behavior of these strategies is presented in terms of their learning speed and convergence to the optimal settings. Simultaneous actions by different cells of the network have a strong impact on this learning behavior. Therefore, we study a stable strategy where only one cell can take an action per network snapshot, as well as a more dynamic strategy where all cells take simultaneous actions in every snapshot. We also propose a cluster-based strategy that aims to combine the benefits of both. Performance is evaluated in three different network states: deployment, normal operation, and cell outage. Simulation results show that the proposed cluster-based strategy learns the optimal configuration much faster than the one-cell-per-snapshot strategy and can also outperform the all-cells-per-snapshot strategy due to better convergence.
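
As a rough illustration of how the three action-scheduling strategies differ, the sketch below uses a plain tabular Q-learning update in place of the paper's Fuzzy Q-Learning controller. The class and function names, parameter values, action set, and the way clusters are defined are all assumptions made for illustration, not the authors' implementation.

```python
import random
from collections import defaultdict

# Hypothetical discrete downtilt adjustments (in degrees) available to each cell.
ACTIONS = [-1.0, 0.0, +1.0]

class DowntiltQLearner:
    """Tabular Q-learning sketch for per-cell downtilt adaptation.

    The paper uses Fuzzy Q-Learning; here the fuzzy state representation is
    replaced by an already-discretized state for brevity.
    """
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy exploration over downtilt adjustments.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def cells_acting(strategy, snapshot, cell_ids, clusters):
    """Select which cells adjust their downtilt in a given network snapshot."""
    if strategy == "one-cell-per-snapshot":
        # Round-robin: a single cell acts per snapshot (stable but slow to learn).
        return [cell_ids[snapshot % len(cell_ids)]]
    if strategy == "all-cells-per-snapshot":
        # Every cell acts simultaneously (fast but harder to converge).
        return list(cell_ids)
    if strategy == "cluster-based":
        # One cluster of cells acts per snapshot; how clusters are formed is assumed here.
        return clusters[snapshot % len(clusters)]
    raise ValueError(f"unknown strategy: {strategy}")
```

In this sketch the trade-off described in the abstract shows up in `cells_acting`: the one-cell-per-snapshot schedule keeps each learner's environment nearly stationary, the all-cells schedule maximizes parallel exploration, and the cluster-based schedule lets non-conflicting cells act together in the same snapshot.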
