Abstract

Centralized radio access network (C-RAN) is a network architecture emerging as a key technology enabler for 5G mobile networks as capacity demands from mobile traffic continue to grow. Essentially, C-RAN separates the remote radio heads from the baseband units, whose processing is moved into the cloud. A systematic design of C-RAN involves mapping individual baseband signal-processing tasks onto general-purpose cloud infrastructure, such as microservers, to reduce the energy footprint. In this paper, we start by mapping the lowest layer of the protocol stack, i.e., the physical layer (PHY), which is characterized by strict latency constraints and dynamic data rates. To achieve this, we explore the use of machine intelligence for energy-efficient mapping of PHY signal processing onto microservers. Fundamental to this approach are: 1) the use of principal component analysis to represent workload from multi-dimensional hardware performance statistics, demonstrating 99.88% correlation with the critical PHY processing latency, and 2) the use of deep learning to model latency and predict dynamic workload for on-demand resource allocation, resulting in up to 36% reduction in hardware usage. These principles are built into a cross-layer run-time framework that adapts resource allocation in response to time-varying data rates, guaranteeing latency and improving energy efficiency by up to 48% (28% on average).
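The first idea in the abstract can be sketched as follows: reduce multi-dimensional hardware performance statistics to a single workload index via principal component analysis, then check how strongly that index correlates with PHY processing latency. This is a minimal illustrative sketch, not the paper's implementation; the counter names, scales, and latency model are hypothetical, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200

# Hypothetical latent workload driven by a time-varying data rate.
workload = rng.uniform(0.1, 1.0, n_samples)

# Synthetic multi-dimensional hardware counters (e.g. instructions retired,
# cache misses, bus accesses per subframe): each scales with workload
# plus a small amount of measurement noise.
counters = np.column_stack([
    workload * scale + rng.normal(0, 0.02, n_samples)
    for scale in (3.0, 1.5, 0.8, 2.2)
])

# PCA via SVD on the mean-centered counter matrix; the first principal
# component score serves as the one-dimensional workload representation.
centered = counters - counters.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]

# Synthetic PHY processing latency that grows with workload (arbitrary units).
latency = 50.0 + 40.0 * workload + rng.normal(0, 0.5, n_samples)

# Correlation between the PCA workload index and latency (sign of a
# principal component is arbitrary, hence the absolute value).
corr = abs(np.corrcoef(pc1, latency)[0, 1])
print(f"correlation(PC1, latency) = {corr:.4f}")
```

With counters this cleanly driven by a single latent workload, the first principal component captures nearly all of their variance, so its correlation with latency is close to 1, mirroring the kind of near-perfect correlation the paper reports on real measurements.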
