Abstract

Complex, dynamic services and heterogeneous network environments make asymmetric control a crucial issue on the Internet. With the advent of the Internet of Things (IoT) and fifth-generation (5G) networks, emerging network applications are driving explosive growth in mobile traffic while imposing more demanding service requirements on future radio access networks. Effectively allocating limited heterogeneous network resources to improve content delivery for massive application services, and thereby ensure network quality of service (QoS), has therefore become particularly urgent. To cope with the explosive mobile traffic generated by emerging Internet services, this paper designs an intelligent optimization strategy based on deep reinforcement learning (DRL) for resource allocation in heterogeneous cloud-edge-end collaboration environments. The asymmetric control problem caused by complex dynamic services and heterogeneous network environments is also analyzed and overcome through distributed cooperation among the cloud, edge, and end nodes in the system. Specifically, the multi-layer heterogeneous resource allocation problem is formulated as a maximal traffic offloading model that employs content caching and request aggregation mechanisms. A novel DRL policy is proposed to improve content distribution by making cache replacement and task scheduling decisions for arriving content requests, based on users' request histories, in-network cache capacity, available link bandwidth, and topology structure. The performance of the proposed solution is analyzed and compared with similar counterparts under different network conditions.
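As a rough illustration of the cache-replacement decision loop the abstract describes, the sketch below trains a tabular Q-learning agent that chooses which cached item to evict on a miss, rewarded for subsequent cache hits. All names and parameters here are hypothetical simplifications: the paper's method uses deep RL over a much richer state (request history, link bandwidth, topology), while this toy keeps only a small content catalog and a two-slot cache.

```python
import random
from collections import defaultdict

# Hypothetical toy setup, not the paper's actual model.
CATALOG = list(range(6))   # content items 0..5
CACHE_SIZE = 2

def zipf_request(rng):
    # Skewed popularity: lower content IDs are requested more often.
    weights = [1.0 / (i + 1) for i in CATALOG]
    return rng.choices(CATALOG, weights=weights)[0]

def train(episodes=3000, eps=0.1, alpha=0.2, gamma=0.9, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)           # Q[(state, slot)] -> value
    cache = CATALOG[:CACHE_SIZE]
    hits = 0
    for _ in range(episodes):
        req = zipf_request(rng)
        state = tuple(sorted(cache))
        if req in cache:             # cache hit: no replacement needed
            hits += 1
            continue
        # Epsilon-greedy choice of which cache slot to evict.
        if rng.random() < eps:
            action = rng.randrange(CACHE_SIZE)
        else:
            action = max(range(CACHE_SIZE), key=lambda a: Q[(state, a)])
        cache[action] = req
        next_state = tuple(sorted(cache))
        # Reward: 1 if a fresh request would hit the updated cache.
        reward = 1.0 if zipf_request(rng) in cache else 0.0
        best_next = max(Q[(next_state, a)] for a in range(CACHE_SIZE))
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
    return Q, hits / episodes

Q, hit_rate = train()
print(f"toy cache hit rate after training: {hit_rate:.2f}")
```

Because popular items are requested most often, the learned policy tends to keep low-ID items cached, mirroring in miniature how a DRL cache-replacement policy exploits request statistics.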
