Abstract
Mobility management is an important feature in modern wireless networks that provides seamless and ubiquitous connectivity to mobile users. Due to the dense deployment of small cells and heterogeneous network topologies, traditional handover control methods can lead to various mobility-related problems, such as frequent handovers and handover failures. Moreover, the maintenance and operation cost of mobility management also increases with node density. In this paper, an autonomous mobility management control approach is proposed to improve the mobility robustness of user equipment (UE) and minimize the operational cost of mobility management. The proposed method is based on reinforcement learning, which can autonomously learn an optimal handover control policy by interacting with the environment. Function approximation is adopted to allow reinforcement learning to handle a large state and action space: a linear function approximator is used to approximate the state-action value function. Finally, the semi-gradient state-action-reward-state-action (Sarsa) method is implemented to update the approximated state-action value function and learn the optimal handover control policy. The simulation results show that the proposed method can effectively improve the mobility robustness of UE across different speed ranges. Compared with the conventional reference signal received power (RSRP) based approach, the proposed approach reduces unnecessary handovers by about 20% and latency by 58%, while achieving a near-zero handover failure rate and increasing throughput by 12%.
Highlights
To handle the growing demand for data traffic, the fifth-generation (5G) communication system integrates massive small cells and traditional macrocells as ultra-dense Heterogeneous Networks (HetNets)
Learning algorithm for autonomous handover control: we present the formulation of the handover control problem as a Markov decision process (Section A) and the approximation of its value function by tile coding (Section B)
A linear function approximator known as tile coding is used to approximate the value function
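The highlights above refer to tile coding, a standard linear function approximation scheme in which a continuous state is covered by several offset grids (tilings) and each tiling contributes one active binary feature; the value estimate is then just the sum of the weights at the active tiles. The following is a minimal, hedged sketch of that feature extraction. The class, its parameters, and the example state bounds (UE speed and RSRP ranges) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class TileCoder:
    """Illustrative tile coder: maps a continuous state to one active
    binary-feature index per tiling (parameters are assumptions, not
    the paper's actual configuration)."""

    def __init__(self, n_tilings=8, tiles_per_dim=10, low=None, high=None):
        self.n_tilings = n_tilings
        self.tiles_per_dim = tiles_per_dim
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        self.dims = len(self.low)
        # each tiling's grid is shifted by a fraction of one tile width
        self.offsets = np.linspace(0.0, 1.0, n_tilings, endpoint=False)
        self.tiles_per_tiling = tiles_per_dim ** self.dims
        self.n_features = n_tilings * self.tiles_per_tiling

    def active_tiles(self, state):
        """Return one active feature index per tiling."""
        # normalise each state dimension into [0, 1]
        s = (np.asarray(state, dtype=float) - self.low) / (self.high - self.low)
        idxs = []
        for t in range(self.n_tilings):
            # shift the grid, then discretise each dimension
            coords = np.floor(s * self.tiles_per_dim + self.offsets[t]).astype(int)
            coords = np.clip(coords, 0, self.tiles_per_dim - 1)
            flat = 0
            for c in coords:
                flat = flat * self.tiles_per_dim + c
            idxs.append(t * self.tiles_per_tiling + flat)
        return idxs

# hypothetical 2-D state: UE speed in m/s and serving-cell RSRP in dBm
coder = TileCoder(low=[0.0, -120.0], high=[30.0, -60.0])
features = coder.active_tiles([12.0, -95.0])
```

A linear value estimate for this state is then the sum of the learned weights at the indices in `features`, which is what makes the gradient of the approximator a simple binary feature vector.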
Summary
To handle the growing demand for data traffic, the fifth-generation (5G) communication system integrates massive small cells and traditional macrocells as ultra-dense Heterogeneous Networks (HetNets). The simple implementation of the A3 event for 4G macrocell systems to trigger handover in 5G ultra-dense HetNets can lead to several problems [3], [4], given diverse mobility scenarios and a smaller cell coverage area. Unlike our previous work in [5], this work goes a step further and uses reinforcement learning with tile coding function approximation to solve the 5G ultra-dense HetNets mobility management problem. The main contributions of this paper are summarized as follows: 1) We introduce a semi-gradient Sarsa framework to learn the optimal policy for handover triggering and target cell selection of 5G ultra-dense HetNets to enable autonomous mobility management.
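The semi-gradient Sarsa framework in contribution 1) updates a linear state-action value estimate with the rule w ← w + α [r + γ q̂(s′, a′, w) − q̂(s, a, w)] ∇q̂(s, a, w); with binary tile features, the gradient is simply the indicator vector of the active tiles. Below is a hedged sketch of one such update step. The environment details, feature hashing, action set, and hyperparameters are toy stand-ins for the paper's handover MDP, not its actual formulation.

```python
import random

N_FEATURES = 64          # size of the binary feature vector (illustrative)
N_ACTIONS = 3            # e.g. stay, handover to cell A, handover to cell B
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# one weight vector per action
w = [[0.0] * N_FEATURES for _ in range(N_ACTIONS)]

def active_features(state):
    """Toy stand-in for tile coding: hash the state to a few binary features."""
    rng = random.Random(state)   # deterministic per state, illustrative only
    return rng.sample(range(N_FEATURES), 4)

def q(state, action):
    """Linear value estimate: sum of weights at the active features."""
    return sum(w[action][i] for i in active_features(state))

def epsilon_greedy(state):
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q(state, a))

def sarsa_step(s, a, r, s_next, done):
    """Semi-gradient Sarsa: w += alpha * TD-error * grad q(s, a, w)."""
    a_next = epsilon_greedy(s_next)
    target = r if done else r + GAMMA * q(s_next, a_next)
    td_error = target - q(s, a)
    for i in active_features(s):  # gradient of a linear q is the feature vector
        w[a][i] += ALPHA * td_error
    return a_next
```

In an actual handover controller the state would encode UE measurements (e.g. RSRP of candidate cells and UE speed), the reward would penalise handover failures and unnecessary handovers, and `sarsa_step` would be called once per measurement interval.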