Users have increasingly diverse use cases for the network while expecting the best Quality of Service (QoS) and Quality of Experience (QoE). The fifth generation of mobile telecommunications technology (5G) promises to satisfy most of these expectations, and network slicing was introduced in 5G to serve varied use cases. However, creating slices in a real-life environment with only the resources required, while maintaining optimized QoS, remains a challenge. This calls for more intelligence in the network, and machine learning (ML) has recently been used to add that intelligence and enable zero-touch automation. This research addresses the open question of creating slices that satisfy varied use cases based on their QoS requirements, then managing and orchestrating them optimally with minimal resources while preserving the isolation of services, by introducing a Deep Reinforcement Learning (DRL) algorithm. The research first evaluates previous work on improving QoS in the 5G core. The 5G architecture is simulated following the ETSI NFV MANO (European Telecommunications Standards Institute Network Functions Virtualization Management and Orchestration) framework, using Open5GS for the 5G core, UERANSIM for the RAN, OpenStack as the Virtual Infrastructure Manager (VIM), and Tacker for Virtual Network Function Management and Orchestration (VNFMO). The research simulates network slicing at the User Plane Function (UPF) level and evaluates how it improves QoS. The slicing function is automated following the ETSI closed-loop architecture and using DRL, modeling the problem as a Markov Decision Process (MDP) in which throughput is the reward for the agent's actions.
The impact of slicing on throughput is compared across three configurations: networks that are not sliced, networks whose slices are combined to work together, and networks whose slices are assigned more bandwidth. Sliced networks achieve better throughput than unsliced ones, and load-balancing across more slices increases throughput further. Deep Reinforcement Learning achieves dynamic assignment of slices to compensate for declining throughput.
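The MDP formulation described above can be illustrated with a minimal sketch. All names and parameters here are illustrative assumptions, not the paper's actual implementation: slice load is discretized into bandwidth levels, the action reallocates one bandwidth level between UPF slices, a toy diminishing-returns function stands in for measured throughput as the reward, and tabular Q-learning stands in for the deep RL agent.

```python
import random

N_SLICES = 3          # assumed number of UPF slices (illustrative)
LEVELS = 4            # discretized bandwidth levels per slice (illustrative)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def throughput(alloc):
    # Toy reward: diminishing returns per extra bandwidth level,
    # so balancing load across slices yields higher total throughput.
    return sum(level ** 0.5 for level in alloc)

def step(alloc, action):
    # Action: move one bandwidth level to slice `action`
    # from the most heavily provisioned other slice.
    alloc = list(alloc)
    donor = max((i for i in range(N_SLICES) if i != action),
                key=lambda i: alloc[i])
    if alloc[donor] > 0 and alloc[action] < LEVELS - 1:
        alloc[donor] -= 1
        alloc[action] += 1
    return tuple(alloc), throughput(alloc)

Q = {}  # tabular Q-values: (state, action) -> estimated return
random.seed(0)
state = (LEVELS - 1, 0, 0)  # start with all bandwidth on one slice
for episode in range(2000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.randrange(N_SLICES)
    else:
        action = max(range(N_SLICES), key=lambda a: Q.get((state, a), 0.0))
    nxt, reward = step(state, action)
    best_next = max(Q.get((nxt, a), 0.0) for a in range(N_SLICES))
    key = (state, action)
    Q[key] = Q.get(key, 0.0) + ALPHA * (reward + GAMMA * best_next - Q.get(key, 0.0))
    state = nxt
```

Because the toy reward is concave, the agent learns to spread bandwidth across slices, mirroring the finding that load-balanced slices improve throughput; the paper's actual agent replaces the table with a deep network and the toy reward with measured throughput.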