Abstract

The next-generation mobile networks, LTE and LTE-Advanced (LTE-A), are all-IP networks. In such IP-based networks, Quality of Service (QoS) becomes increasingly critical as network size and heterogeneity grow. This paper proposes a Reinforcement Learning (RL) based framework for QoS enhancement. The framework achieves coverage/capacity optimization by adjusting the scheduling strategy: the proposed self-optimization algorithm exploits the coverage/capacity trade-off in Packet Scheduling (PS) to maximize the capacity of an eNB subject to the constraint that a minimum coverage level is not violated. Each eNB has an associated agent that dynamically adjusts the eNB's scheduling parameter, using the RL technique of Fuzzy Q-Learning (FQL) to learn its optimal value. The learning framework is designed to operate in an environment with varying traffic, user positions, and propagation conditions. A comprehensive analysis of the simulation results shows that the proposed approach can significantly improve network coverage as well as capacity in terms of throughput.

DOI: http://dx.doi.org/10.5755/j01.eee.20.9.4786

Highlights

  • The mobile networks have undergone an enormous growth in terms of size and complexity during the last few years

  • This paper examines the use of Fuzzy Q-Learning (FQL) to optimize Packet Scheduling (PS) so as to achieve maximum eNB capacity while satisfying a minimum coverage constraint. The type of PS used in this work is α-fair scheduling [20], which generalizes the well-known Proportional Fair (PF), Max Throughput (MTP), and Max-Min Fair (MMF) schedulers

  • For a traffic value of 5 arrivals/sec, only a marginal improvement in Average Bitrate (ABR) is observed, as the α-fair scheduler tends to be even more fair, so that the mean Access Probability (AP) does not fall below 90 %
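The family of schedulers the highlights refer to can be illustrated with the standard gradient form of α-fair scheduling, in which each scheduling instant the user maximizing r_i / T_i^α is served (r_i: instantaneous achievable rate, T_i: smoothed past throughput). The function and variable names below are illustrative, not taken from the paper; the sketch only shows how α interpolates between MTP (α = 0), PF (α = 1), and approximately MMF (α large):

```python
import math

def alpha_fair_pick(inst_rates, avg_thrpts, alpha):
    """Return the index of the user maximizing the alpha-fair scheduling
    metric r_i / T_i**alpha (gradient of the alpha-fair utility).

    alpha = 0  -> Max Throughput (picks the highest instantaneous rate)
    alpha = 1  -> Proportional Fair
    alpha >> 1 -> approaches Max-Min Fair (favours the lowest-throughput user)
    """
    best_idx, best_metric = None, -math.inf
    for i, (r, T) in enumerate(zip(inst_rates, avg_thrpts)):
        metric = r / (T ** alpha)
        if metric > best_metric:
            best_idx, best_metric = i, metric
    return best_idx

# Two users: user 1 has the better channel, user 0 the lower past throughput.
rates = [4.0, 10.0]   # instantaneous achievable rates (Mbit/s, illustrative)
thrpt = [1.0, 5.0]    # smoothed past throughputs

print(alpha_fair_pick(rates, thrpt, 0))  # MTP serves user 1
print(alpha_fair_pick(rates, thrpt, 1))  # PF serves user 0
```

Sweeping α upward thus shifts resources from cell-centre users towards cell-edge users, which is exactly the coverage/capacity trade-off the paper's FQL agent exploits.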


Summary

INTRODUCTION

Mobile networks have undergone enormous growth in size and complexity during the last few years. Recent research on LTE self-optimization has mainly focused on dynamically optimising Radio Resource Management (RRM) parameters, such as resource and bandwidth allocation [10], Inter-Cell Interference Coordination (ICIC) [11], [12], and load balancing [13], [14]. It has been shown in [9] and [15] that the rules of a Fuzzy Logic Controller (FLC) can be optimized using Q-Learning (QL). The contribution of this paper is a novel self-optimization procedure for coverage/capacity optimization based on PS. This approach has the advantage of being scalable with increasing network size, because adjusting the α-parameter of an eNB has very little impact on the KPIs of its neighbours [21]. At the same time, the coverage, given as the number of users served, varies between a minimum and a maximum value as the PF scheduler tries to achieve fairness
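The paper's FQL combines fuzzy inference with the standard tabular Q-learning update; the fuzzy-rule details are not reproduced on this page, but the underlying update rule Q(s,a) ← Q(s,a) + lr·(r + γ·max_a′ Q(s′,a′) − Q(s,a)) can be sketched as follows (function and parameter names are illustrative, not the paper's):

```python
def q_update(Q, s, a, reward, s_next, actions, lr=0.1, gamma=0.9):
    """One tabular Q-learning step on a dict-backed Q-table.

    Q       : dict mapping (state, action) -> value (missing entries read as 0)
    actions : iterable of actions available in state s_next
    """
    old = Q.get((s, a), 0.0)
    # Bootstrap from the best action value in the successor state.
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = old + lr * (reward + gamma * best_next - old)
    return Q[(s, a)]

Q = {}
# One update from an empty table: 0 + 0.5 * (1.0 + 0.9*0 - 0) = 0.5
print(q_update(Q, "low_cov", "raise_alpha", 1.0, "ok_cov",
               ["raise_alpha", "lower_alpha"], lr=0.5))
```

In FQL, each fuzzy rule maintains such action values, and the fired rules' updates are weighted by their degrees of activation; this sketch shows only the crisp tabular core.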

QL FOR SELF-OPTIMIZATION IN LTE
FUZZY Q-LEARNING
COMPONENTS OF FQL RL SYSTEM
Simulation Scenario
Findings
CONCLUSIONS
