Abstract

It is widely acknowledged that network slicing can tackle the diverse use cases and connectivity services of forthcoming next-generation (5G) mobile networks. In radio access network (RAN) slicing, resource scheduling is of vital importance for improving the resource-multiplexing gain among slices while meeting each slice's specific service requirements. Unfortunately, performance isolation, diversified service requirements, and network dynamics (including user mobility and channel states) make resource scheduling in RAN slicing very challenging. In this paper, we propose an intelligent resource scheduling strategy (iRSS) for 5G RAN slicing. The main idea of iRSS is a collaborative learning framework that combines deep learning (DL) with reinforcement learning (RL): DL performs large time-scale resource allocation, whereas RL performs online resource scheduling to tackle small time-scale network dynamics, including inaccurate predictions and unexpected network states. Depending on the amount of available historical traffic data, iRSS can flexibly adjust the relative weight given to the prediction and online decision modules when assisting the RAN in making resource scheduling decisions. Numerical results show that iRSS converges fast enough for online resource scheduling and, compared with benchmark algorithms, significantly improves resource utilization while guaranteeing performance isolation between slices.
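To make the two-timescale idea concrete, the following is a minimal, hypothetical sketch, not the paper's implementation: a moving-average predictor stands in for the DL module that sets per-slice baseline allocations at the large time scale, and a tabular Q-learning agent stands in for the paper's RL module that applies small time-scale corrections. All names (predict_baseline, state_of, choose), the state/action/reward shapes, and the Poisson traffic model are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    N_SLICES = 3           # number of RAN slices
    TOTAL_RBS = 100        # resource blocks shared by all slices
    ACTIONS = (-5, 0, 5)   # per-slice adjustment (in RBs) at the small time scale
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # Q-learning hyperparameters (assumed)

    def predict_baseline(history):
        # Large time-scale allocation. Stand-in for the DL predictor:
        # a moving average of each slice's recent demand, scaled to TOTAL_RBS.
        demand = history[-10:].mean(axis=0)
        return np.floor(demand / demand.sum() * TOTAL_RBS)

    def state_of(demand, alloc):
        # Coarse state: per-slice sign of (demand - allocation).
        return tuple(int(s) for s in np.sign(demand - alloc))

    def reward(demand, alloc):
        # Penalize unmet demand (isolation/SLA violations) more heavily than
        # idle RBs (lost multiplexing gain); the weights are illustrative.
        unmet = np.maximum(demand - alloc, 0).sum()
        idle = np.maximum(alloc - demand, 0).sum()
        return -(2.0 * unmet + idle)

    Q = {}  # (state, slice, action index) -> estimated value

    def choose(state, s):
        # Epsilon-greedy action selection for slice s.
        if rng.random() < EPS:
            return int(rng.integers(len(ACTIONS)))
        return int(np.argmax([Q.get((state, s, a), 0.0)
                              for a in range(len(ACTIONS))]))

    history = rng.poisson(lam=(40, 30, 20), size=(10, N_SLICES)).astype(float)

    for t in range(500):                          # small time-scale loop
        demand = rng.poisson(lam=(40, 30, 20)).astype(float)
        alloc = predict_baseline(history)         # DL stand-in (large time scale)
        state = state_of(demand, alloc)
        acts = [choose(state, s) for s in range(N_SLICES)]
        alloc = np.clip(alloc + [ACTIONS[a] for a in acts], 0, None)  # RL correction
        r = reward(demand, alloc)
        nxt = state_of(demand, alloc)
        for s, a in enumerate(acts):              # tabular Q-learning update
            best = max(Q.get((nxt, s, b), 0.0) for b in range(len(ACTIONS)))
            old = Q.get((state, s, a), 0.0)
            Q[(state, s, a)] = old + ALPHA * (r + GAMMA * best - old)
        history = np.vstack([history[1:], demand])  # slide the traffic window

The asymmetric reward mirrors the abstract's twin goals: guarding each slice's service requirements (isolation) while limiting idle resources (utilization). In the paper's framework, the prediction and correction modules are learned jointly and their relative weight depends on how much historical traffic data is available; this sketch fixes both for brevity.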
