Abstract

To cope with growing and diversifying 5G services, RAN slicing has been proposed as an effective resource allocation mechanism. Each RAN slice serves distinct service requirements, with baseband processing functions (BPFs), e.g., distributed units (DUs) and centralized units (CUs), implemented as virtual machines in a processing pool (PP). Co-locating the virtualized DUs/CUs (vDUs/vCUs) of multiple slices in a single PP improves resource utilization and reduces power consumption. Because mobile traffic and slice resource demands fluctuate over time, a trade-off arises: migrating RAN slices improves resource efficiency, whereas avoiding migration prevents user service interruption and thus preserves users' QoS. Additionally, an elastic optical network (EON) is employed as the substrate metro aggregation network for flexible and spectrum-efficient scheduling; in this context, the routing and spectrum allocation of the optical paths connecting different BPFs should also be optimized for efficient spectrum utilization. To address this RAN slice deployment and migration problem, we propose a heuristic-assisted deep reinforcement learning (HA-DRL) algorithm that jointly optimizes power consumption, slice migration, and spectrum consumption. Two heuristic algorithms, RAN slice reallocation (RSR) and RAN slice adjustment (RSA), are proposed; using their results as a reference, HA-DRL achieves a better trade-off among the three optimization objectives. Simulations on a small-scale 9-node network and a large-scale 30-node network demonstrate the superiority of HA-DRL over the baseline heuristics, yielding significant reductions in migrated traffic and spectrum consumption at only a minor power consumption cost.