Abstract

Multiaccess edge computing (MEC) is a new paradigm that meets the demands of resource-hungry and latency-sensitive services by enabling the placement of services and the execution of computing tasks at the edge of radio access networks, much closer to resource-constrained devices. However, how to serve more requests while reducing service latency under limited resources (storage capacity, CPU cycles, and communication bandwidth) remains a critical issue in multidevice MEC-assisted IoT networks, since the time-varying computing demands of devices and the unavailability of future information make it difficult to determine where to handle computation tasks and which services to cache. In this article, we propose a twin-timescale framework that jointly optimizes adaptive request scheduling (RS) and cooperative service caching (SC) in multidevice MEC-assisted networks, in order to exploit request dynamics, MEC heterogeneity, and service differences. To accommodate the unavailability of future information and unknown system dynamics, we formulate RS and SC, respectively, as partially observable Markov decision process (POMDP) problems. We then propose a deep reinforcement learning (DRL)-based online algorithm that improves the service latency reduction ratio and hit rate without requiring <italic xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">a priori</italic> knowledge such as service popularity. Moreover, we derive the optimal CPU-cycle and communication-bandwidth allocations to further minimize the average service latency. Extensive trace-driven simulation results demonstrate the efficacy of the proposed approach.
