Abstract

In 5G-and-beyond networks, Network Slicing supports multiple independent, co-existing logical networks on a shared physical network infrastructure. The infrastructure provider (InP) owns the virtual and physical resources used to serve the tenants' slice requests. Each slice request specifies a service level agreement (SLA) that contains the required slice-level resources (computation and communication) and the revenue offered by the tenant. Because its resources are limited, the InP cannot accommodate every tenant request. In practice, tenants tend to overestimate their resource demands (e.g., for 5G Core computation) to reduce possible SLA violations. In this paper, we consider two major slice types: Elastic (low priority, low revenue) and Inelastic (high priority, high revenue). We apply the concept of overbooking, in which the InP accepts more slices while accounting for slice priorities, in order to maximize overall revenue and utilization. We consider a multi-tenant environment and propose a slice admission system named PRLOV, which is based on Reinforcement Learning (RL) and prediction methods. The system predicts the future resource demands of Elastic slices and applies an opportunistic overbooking technique that overbooks the InP's resources to accept more slices. The admission decision is formulated as a Markov Decision Process (MDP) and solved using standard RL techniques (Policy Iteration, Q-Learning, DQN). The performance of the proposed system is compared against three heuristics (Basic, Prediction, PRL) that do not use overbooking. Data traces from the Materna data center networks are used for prediction. The key performance metrics are the InP's total revenue, the acceptance rate of each slice type, and the overall resource utilization across slices. The results show that the proposed system significantly outperforms the other mechanisms in terms of revenue gain and resource utilization: PRLOV provides a revenue gain of 6%, 26%, and 102% over the PRL, Prediction, and Basic schemes, respectively.
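To make the RL-based admission idea concrete, the sketch below shows a minimal tabular Q-learning loop for accept/reject decisions over a stream of slice requests with an overbooking headroom. It is only an illustrative approximation: the slice demands, revenues, overbooking factor, state discretisation, and penalty terms are all assumptions, and it omits the demand prediction for Elastic slices and the DQN/Policy Iteration solvers that the PRLOV system itself uses.

```python
import random
from collections import defaultdict

# Hypothetical slice parameters (not taken from the paper).
SLICE_TYPES = {"elastic": {"demand": 2, "revenue": 1.0},
               "inelastic": {"demand": 4, "revenue": 3.0}}
CAPACITY = 20           # nominal InP resource units (assumed)
OVERBOOK_FACTOR = 1.2   # admit up to 120% of nominal capacity (assumed)
ACTIONS = (0, 1)        # 0 = reject, 1 = accept

def simulate_episode(Q, eps=0.1, alpha=0.1, gamma=0.9, horizon=50):
    """Run one episode of Q-learning over a stream of random slice requests."""
    booked = 0
    for _ in range(horizon):
        stype = random.choice(list(SLICE_TYPES))
        spec = SLICE_TYPES[stype]
        state = (stype, min(booked // 2, 15))  # coarse discretisation of current load
        # Epsilon-greedy action selection over the tabular Q-values.
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        reward = 0.0
        if action == 1:
            if booked + spec["demand"] <= CAPACITY * OVERBOOK_FACTOR:
                booked += spec["demand"]
                reward = spec["revenue"]
                # Crude SLA-violation penalty when overbooked load exceeds
                # real capacity and a high-priority slice is involved.
                if booked > CAPACITY and stype == "inelastic":
                    reward -= 2.0
            else:
                reward = -1.0  # admission attempt beyond the overbooking limit
        next_state = (stype, min(booked // 2, 15))
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return booked

if __name__ == "__main__":
    Q = defaultdict(float)
    for _ in range(2000):
        simulate_episode(Q)
    print("Learned Q-values for a sample state:",
          {a: round(Q[(("elastic", 5), a)], 3) for a in ACTIONS})
```

In a fuller version, the reward would be driven by predicted (rather than declared) Elastic demand, which is where the prediction component and opportunistic overbooking interact.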
