Abstract

The current trend in user services places ever-growing demands on networks for higher data rates, near-real-time latency, and near-perfect quality of service. To meet these demands, fundamental changes have been made to the traditional radio access network (RAN), introducing Open RAN (O-RAN), a new paradigm built on a virtualized and intelligent RAN architecture. However, with the increased complexity of 5G applications, traditional application-specific placement techniques have reached a bottleneck. Our paper presents a Transfer Learning (TL)-augmented Reinforcement Learning (RL) based network slicing (NS) solution that targets more effective placement and reduced downtime for prolonged slice deployments. To achieve this, we propose an approach built around a robust, dynamic repository of specialized RL agents and network slices geared towards popular user service types such as eMBB, URLLC, and mMTC. The proposed solution consists of a heuristic-controlled, two-module ML Engine and a repository. The objective function is formulated to minimize the downtime incurred by the virtual network functions (VNFs) hosted on commercial off-the-shelf (COTS) servers. The performance of the proposed system is evaluated against traditional approaches using industry-standard 5G traffic datasets, and the results show that the proposed solution consistently achieves lower downtime than the traditional algorithms.
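As a purely illustrative aid to the repository-of-specialized-agents idea summarized above, the short Python sketch below pairs one lightweight agent with each slice type (eMBB, URLLC, mMTC), applies a transfer-style warm start when a new slice type appears, and uses a reward that penalizes VNF downtime on COTS servers. Every identifier in the sketch (SliceType, PlacementAgent, AgentRepository, downtime_penalty) is an assumption introduced for exposition and does not reflect the paper's actual ML Engine or objective function.

```python
# Illustrative sketch only: names and update rules are assumptions, not the
# paper's implementation.
from dataclasses import dataclass, field
from enum import Enum
import random


class SliceType(Enum):
    EMBB = "eMBB"      # enhanced Mobile Broadband
    URLLC = "URLLC"    # Ultra-Reliable Low-Latency Communication
    MMTC = "mMTC"      # massive Machine-Type Communication


@dataclass
class PlacementAgent:
    """Stand-in for a specialized RL agent: value estimates per COTS server."""
    slice_type: SliceType
    q_values: dict = field(default_factory=dict)  # server_id -> estimated value

    def choose_server(self, servers, epsilon=0.1):
        # Epsilon-greedy placement: mostly exploit learned values, sometimes explore.
        if random.random() < epsilon:
            return random.choice(servers)
        return max(servers, key=lambda s: self.q_values.get(s, 0.0))

    def update(self, server, reward, lr=0.1):
        # Simple running-average value update.
        old = self.q_values.get(server, 0.0)
        self.q_values[server] = old + lr * (reward - old)


class AgentRepository:
    """Repository of specialized agents, one per popular slice type."""
    def __init__(self):
        self._agents = {}

    def get(self, slice_type):
        if slice_type not in self._agents:
            # Transfer-learning-style warm start (assumed): copy values from an
            # existing agent rather than training the new one from scratch.
            donor = next(iter(self._agents.values()), None)
            warm_start = dict(donor.q_values) if donor else {}
            self._agents[slice_type] = PlacementAgent(slice_type, warm_start)
        return self._agents[slice_type]


def downtime_penalty(downtime_seconds):
    """Reward consistent with the stated objective: less VNF downtime, higher reward."""
    return -downtime_seconds


if __name__ == "__main__":
    repo = AgentRepository()
    servers = ["cots-1", "cots-2", "cots-3"]
    agent = repo.get(SliceType.URLLC)
    for _ in range(200):
        server = agent.choose_server(servers)
        observed_downtime = random.uniform(0.0, 2.0)  # placeholder measurement
        agent.update(server, downtime_penalty(observed_downtime))
    print("Learned placement preferences:", agent.q_values)
```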
