Abstract

Network slicing in the fifth-generation (5G) radio access network (RAN) enables serving massive network traffic with diverse and stringent quality-of-service (QoS) requirements: RAN slicing builds multiple logical networks over a single RAN infrastructure. This paper considers the three slice types standardized by the 3rd Generation Partnership Project (3GPP): ultra-reliable low-latency communication (URLLC), enhanced mobile broadband (eMBB), and massive machine-type communication (mMTC), each with its own data-rate, latency, and reliability requirements. Cloud RAN (C-RAN) was proposed to address the requirements of 5G services by physically separating the remote radio head (RRH) from the baseband unit (BBU) in a two-layer architecture. While C-RAN improves resource utilization and energy consumption, it limits network scalability; moreover, it can meet neither the stringent latency demands of URLLC services nor the massive fronthaul capacity required by eMBB requests. To address these issues, we propose a novel cloud-fog RAN (CF-RAN) over wavelength-division multiplexing (WDM) architecture, in which the RAN functions are divided into three layers: RRHs at layer 1, fog nodes at layer 2, and BBU hotels at layer 3. This architecture employs the emerging fog-computing paradigm in optical and wireless networks; to serve low-latency URLLC requests, the fog nodes are placed close to the cell site (CS). We formulate an integer linear programming (ILP) model whose objective is to minimize the number of active BBU hotels and fog nodes while satisfying practical network constraints. We further propose a low-complexity greedy heuristic and compare it against the branch-and-bound (B&B) algorithm, which provides an optimal solution at exponential complexity.
The proposed three-layer CF-RAN over WDM architecture achieves a 70% improvement in BBU centralization and a 50% reduction in request blocking compared to the conventional two-layer C-RAN architecture.
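The abstract does not detail the greedy heuristic, so the sketch below shows one plausible first-fit interpretation in Python: requests with the tightest latency bounds are served first, each request is packed onto an already-active node before a new one is opened (which keeps the active-node count low, matching the ILP objective), and URLLC requests prefer the nearby fog nodes while eMBB/mMTC requests prefer BBU hotels. All node capacities, latency figures, and the ordering rule are illustrative assumptions, not parameters from the paper.

```python
# Illustrative sketch of a first-fit greedy placement heuristic for the
# CF-RAN setting described in the abstract. Capacities, latencies, and
# the ordering rule are assumptions for demonstration only.
from dataclasses import dataclass


@dataclass(eq=False)  # identity-based equality, so `in` checks node identity
class Node:
    kind: str            # "fog" (layer 2) or "bbu" (layer 3)
    capacity: float      # total processing capacity (illustrative units)
    latency_ms: float    # one-way latency from the cell site
    load: float = 0.0    # capacity currently in use


@dataclass
class Request:
    slice_type: str      # "URLLC", "eMBB", or "mMTC"
    demand: float        # required processing capacity
    max_latency_ms: float


def greedy_place(requests, fog_nodes, bbu_hotels):
    """Assign each request to a node that satisfies its latency bound,
    reusing active nodes before opening new ones; return the assignment
    (index in latency-sorted order -> node) and the blocked requests."""
    active, assignment, blocked = [], {}, []
    # Serve tight-latency requests first so they claim scarce fog capacity.
    for i, req in enumerate(sorted(requests, key=lambda r: r.max_latency_ms)):
        # URLLC prefers fog nodes (closer to the cell site); other slices
        # prefer BBU hotels (better centralization).
        pool = (fog_nodes + bbu_hotels) if req.slice_type == "URLLC" \
            else (bbu_hotels + fog_nodes)
        # First fit: try already-active nodes before opening a new one.
        candidates = [n for n in active if n in pool] + \
                     [n for n in pool if n not in active]
        for node in candidates:
            if (node.latency_ms <= req.max_latency_ms
                    and node.capacity - node.load >= req.demand):
                node.load += req.demand
                if node not in active:
                    active.append(node)
                assignment[i] = node
                break
        else:
            blocked.append(req)  # no feasible node: request is blocked
    return assignment, blocked
```

In this sketch, a URLLC request lands on a fog node whenever one within its latency bound has spare capacity, and later eMBB/mMTC requests reuse already-active nodes where possible instead of activating additional BBU hotels.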
