Abstract

In this paper, we present an energy-efficient Self-Organizing Network (SON) architecture based on tunable eNodeB (eNB) antenna tilt for macrocells in a mobile network environment. Such an architecture is an imperative element of mobility management in high-speed, low-latency wireless networks. The SON architecture follows a fully distributed approach with optional network information exchange with neighboring cells and the core network. Antenna tilt directly affects the radiation pattern, so adjusting the eNB antenna tilt can be used to optimize cell coverage and reduce interference in mobile networks. We apply and compare two reinforcement learning techniques for optimizing the eNB antenna tilts: Deep Q-learning using an Artificial Neural Network (ANN), and a simple Stochastic Cellular Learning Automata (SCLA) scheme. The ANN is well known for its ability to learn from a vast number of inputs, while the stochastic learning technique relies on a simple action-based probability vector updated from system feedback. Neighboring cells for any given cell in the network environment are selected based on their separation distance and antenna orientation. We validate the data-call performance of the network for edge users, as they directly impact the Quality of Service (QoS) in the mobile environment. Our simulation results show that the ANN approach performs better for edge users than SCLA. The model also satisfies the SON requirements of scalability and agility. This work is a follow-up to our earlier work, where we showed that SCLA outperforms Q-learning in a similar network environment and optimization strategy due to its low complexity, although within the Q-learning algorithm itself, more input learning parameters gave better performance.
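The SCLA mechanism mentioned above, an action-based probability vector updated from system feedback, can be sketched as a linear reward-inaction learning automaton. The tilt discretization, learning rate, and function names below are illustrative assumptions, not the paper's actual implementation:

```python
import random

def scla_update(probs, action, reward, lr=0.1):
    """Linear reward-inaction update for a stochastic learning automaton.

    probs  : probability vector over discrete tilt actions (hypothetical)
    action : index of the tilt action just taken
    reward : 1 if system feedback (e.g. edge-user QoS) was favorable, else 0
    """
    if reward:  # reward-inaction: the vector changes only on favorable feedback
        probs = [p * (1.0 - lr) for p in probs]  # shrink all probabilities
        probs[action] += lr                      # shift mass to the rewarded action
    return probs

def select_action(probs, rng=random.random):
    """Sample a tilt action index according to the current probability vector."""
    r, cum = rng(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1  # guard against floating-point rounding
```

Each cell can run this loop independently, which matches the fully distributed SON approach: sample a tilt, observe feedback, update the vector.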
