Abstract

Radio access network (RAN) slicing is a key element in enabling current 5G networks and next-generation networks to meet the requirements of different services across various verticals. However, the heterogeneous nature of these services' requirements, along with the limited RAN resources, makes RAN slicing very complex. Indeed, the challenge that mobile virtual network operators (MVNOs) face is to rapidly adapt their RAN slicing strategies to frequent changes in environment constraints and service requirements. Machine learning techniques, such as deep reinforcement learning (DRL), are increasingly considered a key enabler for automating the management and orchestration of RAN slicing operations. Nevertheless, the ability to generalize DRL models to multiple RAN slicing environments may be limited by their strong dependence on the environment data on which they are trained. Federated learning enables MVNOs to leverage more diverse training inputs for DRL without the high cost of collecting this data from different RANs. In this article, we propose a federated deep reinforcement learning approach for Open RAN slicing. In this approach, MVNOs collaborate to improve the performance of their DRL-based RAN slicing models. Each MVNO trains a DRL model and sends it for aggregation. The aggregated model is then sent back to each MVNO for immediate use and further training. The simulation results show the effectiveness of the proposed approach.
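The collaboration loop described above (each MVNO trains locally, a server aggregates the models, and the aggregate is redistributed) can be sketched in a few lines. This is a minimal illustration assuming FedAvg-style element-wise weight averaging; the `local_update` function, its gradient inputs, and all parameter values are hypothetical stand-ins for each MVNO's actual DRL training, which the article does not specify at this level of detail.

```python
def local_update(weights, gradient, lr=0.1):
    """Stand-in for one MVNO's local DRL training step (hypothetical)."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(models):
    """Aggregate local models by element-wise averaging (FedAvg-style)."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

# Three MVNOs start from a shared global model (illustrative values).
global_model = [0.0, 0.0]
local_gradients = [[1.0, -2.0], [3.0, 0.0], [2.0, 2.0]]

for _ in range(2):  # two federation rounds
    local_models = [local_update(global_model, g) for g in local_gradients]
    global_model = federated_average(local_models)  # sent back to every MVNO

print(global_model)
```

Each round, every MVNO refines the shared model on its own RAN data and only the model parameters cross the network, so no raw environment data leaves any operator.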
