Abstract

With the rapid development of wireless communication technologies, emerging multimedia applications are making mobile Internet traffic grow explosively while imposing higher service requirements on next-generation wireless networks. How to achieve low-latency content transmission by effectively allocating heterogeneous network resources, so as to improve network quality of service and end-user quality of experience, is therefore a key and urgent problem in today's Internet. In this article, we propose a deep reinforcement learning (DRL)-based resource allocation scheme to improve content distribution in a layered fog radio access network (FRAN). We formulate the optimal resource allocation problem as a minimal-delay model, in which in-network caching is deployed and identical content requests from mobile users can be aggregated in the queue of each base station. Moreover, to cope with growing user demand and overcome the capacity constraints of the FRAN, our model employs a cloud–edge cooperative offloading scheme, in which the integrated allocation of caching, computing, and communication resources and the joint optimization of in-network caching and routing are considered to improve resource utilization and content delivery. In our solution, a new DRL policy is designed to make cross-layer cooperative caching and routing decisions for arriving content requests according to request history and the available network resources in the system. Simulation results demonstrate that our proposed model performs much better than existing cloud–edge cooperation schemes in the FRAN.
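As an illustrative sketch only, and not the paper's actual algorithm, the kind of decision the DRL policy makes — routing each arriving content request to the fog cache, the edge, or the remote cloud so as to minimize delay — can be approximated with a toy tabular Q-learning agent. All tier delays, the cache-miss penalty, and the one-bit cache state below are invented for illustration:

```python
import random

# Hypothetical three-tier FRAN: route each request to the fog cache, the
# edge cloud, or the remote cloud. Delay constants are assumptions.
ACTIONS = ["fog", "edge", "cloud"]
DELAY = {"fog": 1.0, "edge": 5.0, "cloud": 20.0}   # assumed per-tier delays (ms)
CACHE_MISS_PENALTY = 30.0                          # assumed fetch cost on a fog miss

def step(state, action):
    """Return (delay, next_state). state = 1 if the content is cached at the fog tier."""
    cached = state
    if action == "fog":
        delay = DELAY["fog"] if cached else DELAY["fog"] + CACHE_MISS_PENALTY
        next_cached = 1            # in-network caching stores the content after a fetch
    else:
        delay = DELAY[action]
        next_cached = cached
    return delay, next_cached

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning that minimizes expected cumulative delay."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
    state = 0                      # content initially not cached
    for _ in range(episodes):
        # epsilon-greedy over delay costs (lower Q is better here)
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = min(ACTIONS, key=lambda x: q[(state, x)])
        delay, nxt = step(state, a)
        best_next = min(q[(nxt, x)] for x in ACTIONS)
        q[(state, a)] += alpha * (delay + gamma * best_next - q[(state, a)])
        state = nxt
    return q

q = train()
# Once the content is cached (state 1), the fog tier should have the
# lowest learned cost, so the policy keeps serving requests locally.
print(min(ACTIONS, key=lambda a: q[(1, a)]))
```

The actual scheme in the article replaces this toy table with a deep network over request-history and resource-state features, and the action space covers joint caching and routing decisions across layers; the sketch only conveys the cost-minimizing decision loop.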

