Abstract

Radio Access Network (RAN) slicing is one of the key enablers to provide design flexibility and allow the 5G system to support heterogeneous services over a common platform (i.e., by creating a customized slice for each service). In this regard, this paper provides an analysis of a Reinforcement Learning-based RAN slicing strategy.

Highlights

  • The new fifth generation (5G) of mobile networks will support a wide variety of services and applications over a shared network infrastructure [1]. Since the capabilities required to provide a service depend on each particular use case, 5G needs to adapt to wide and dynamic variations in service requirements

  • 5G is expected to provide a great variety of services spanning three generic types: a) enhanced Mobile Broadband (eMBB), which focuses on services with high data rate requirements; b) massive machine-type communications (mMTC), which support a massive number of static or dynamic machine communications that are only intermittently active; and c) ultra-reliable and low-latency communications (URLLC), which focus on applications requiring very low latency and high reliability, such as mission-critical communications, autonomous driving and Vehicle-to-Everything (V2X) [2,3,4]

  • We have investigated the performance of a Radio Access Network (RAN) slicing strategy that splits the radio resources into multiple RAN slices to support V2X and enhanced mobile broadband (eMBB) services in uplink, downlink and sidelink communications (an illustrative resource-split sketch follows this list)
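
To make the idea of splitting a common radio resource pool among slices concrete, the following is a minimal sketch. The pool sizes, slice names and the split ratio are illustrative assumptions, not values taken from the paper.

```python
"""Illustrative sketch: splitting a common pool of physical resource blocks
(PRBs) between an eMBB slice and a V2X slice per link direction.
All numbers and names below are assumptions for illustration only."""

PRB_POOL = {"uplink": 100, "downlink": 100, "sidelink": 50}  # assumed PRBs per direction

def split_resources(pool, v2x_fraction):
    """Return per-direction PRB counts for each slice given the V2X share."""
    allocation = {}
    for direction, prbs in pool.items():
        v2x_prbs = round(prbs * v2x_fraction)
        allocation[direction] = {"V2X": v2x_prbs, "eMBB": prbs - v2x_prbs}
    return allocation

# Example: assign 30% of each pool to the V2X slice, the rest to eMBB.
print(split_resources(PRB_POOL, v2x_fraction=0.3))
```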


Summary

Introduction

The new fifth generation (5G) of mobile networks will support a wide variety of services and applications over a shared network infrastructure [1]. Since the capabilities needed to provide a service depend on each particular use case, the network must adapt to wide and dynamic variations in service requirements. Although prior works have proposed different approaches for RAN slicing, none of them has dealt with scenarios including slices that support Vehicle-to-Vehicle (V2V) communications, which constitute the focus of this paper. In this respect, in our previous work [22] we proposed a novel strategy based on offline Q-learning and softmax decision-making to determine an adequate split of resources between the different slices while accounting for their utility requirements and the dynamic changes in the traffic load. We extend our previous works [22, 23] by investigating this RAN slicing strategy under different algorithm configurations (i.e., number of Reinforcement Learning actions) and different algorithm parameters, in order to demonstrate its capability to allocate resources efficiently among slices in terms of network metrics such as resource utilization, latency, network traffic load and achievable throughput, and to analyze the impact on algorithm-related metrics such as convergence time.
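
The sketch below illustrates the general shape of offline Q-learning combined with softmax (Boltzmann) decision-making over candidate resource splits, as referenced above. The single-state formulation, the toy reward model, the action granularity and all parameter values are assumptions made for illustration; they do not reproduce the authors' implementation.

```python
"""Minimal sketch of offline Q-learning with softmax action selection for
choosing a resource split between a V2X slice and an eMBB slice.
Reward model, parameters and action set are illustrative assumptions."""

import math
import random

# Candidate actions: fraction of resources given to the V2X slice
# (the eMBB slice receives the remainder). The list length corresponds to
# the "number of actions" configuration studied in the paper.
ACTIONS = [i / 10 for i in range(11)]            # 0.0, 0.1, ..., 1.0

ALPHA, GAMMA, TAU = 0.1, 0.9, 0.05               # learning rate, discount, softmax temperature
q_values = {a: 0.0 for a in ACTIONS}             # single-state Q-table for simplicity

def reward(v2x_share, v2x_load=0.4, embb_load=0.5):
    """Toy utility: penalize the gap between each slice's share and its
    (assumed) offered load. The paper's reward is based on slice utilities."""
    return -abs(v2x_share - v2x_load) - abs((1 - v2x_share) - embb_load)

def softmax_select(q):
    """Softmax (Boltzmann) decision-making over the candidate splits."""
    weights = [math.exp(q[a] / TAU) for a in ACTIONS]
    total = sum(weights)
    return random.choices(ACTIONS, weights=[w / total for w in weights])[0]

# Offline learning loop (single state, so the bootstrap term reuses the same table).
for episode in range(5000):
    a = softmax_select(q_values)
    r = reward(a)
    q_values[a] += ALPHA * (r + GAMMA * max(q_values.values()) - q_values[a])

best = max(q_values, key=q_values.get)
print(f"Learned V2X share: {best:.1f} (eMBB share: {1 - best:.1f})")
```

With a softmax policy, the temperature TAU controls how strongly the learner favors high-valued splits during learning; smaller values make the selection greedier, which trades exploration against convergence time, one of the algorithm-related metrics analyzed in the paper.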

System Model
Problem Formulation for RAN Slicing
Reinforcement Learning-based RAN Slicing Solution
Reward Computation
Q-learning and low complexity heuristic algorithm
Simulation Setup
Impact of the number of Actions on the performance
Network Performance Metrics
Findings
Conclusions