Abstract

Edge computing has been envisioned as a key enabler for computation-intensive and delay-sensitive services in future Internet of Things (IoT) systems. By offloading computational tasks to an edge server, both service latency and energy consumption can be reduced. Since devices may request various types of computing services, caching the appropriate services at the edge server so that computing resources are immediately available can improve the quality of service. Nevertheless, service caching introduces new challenges for joint resource allocation, because caching and offloading operations occur on different timescales. In this article, we first formulate collaborative service caching and computation offloading as a dual-timescale resource allocation problem that minimizes the combined cost of latency and energy consumption. Under this framework, a novel scheme based on hierarchical deep reinforcement learning is proposed to output collaborative caching and computing actions. Specifically, the proposed approach combines a service caching policy and a device computing policy with hierarchical action–value functions, which allows flexible configuration of the caching timescale. Simulation results demonstrate that the proposed policy outperforms existing schemes in convergence performance and across a range of parameter settings.
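The dual-timescale structure described above can be illustrated with a minimal sketch: a slow-timescale agent updates the cached service every few slots, while a fast-timescale agent makes a per-slot offloading decision conditioned on the current cache. All names, the tabular Q-learning stand-in for the deep networks, the `CACHE_PERIOD` value, and the toy cost model are assumptions for illustration only, not the paper's actual algorithm.

```python
import random

class QAgent:
    """Tabular Q-learning agent (illustrative stand-in for a deep Q-network)."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {}  # maps (state, action) -> estimated value
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        vals = [self.q.get((state, a), 0.0) for a in range(self.n_actions)]
        return max(range(self.n_actions), key=vals.__getitem__)

    def update(self, s, a, r, s2):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((s2, a2), 0.0) for a2 in range(self.n_actions))
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (r + self.gamma * best_next - old)

# Hierarchical dual-timescale loop: the caching agent acts once every
# CACHE_PERIOD slots (slow timescale), while the offloading agent acts
# every slot (fast timescale), conditioned on the current cache.
CACHE_PERIOD = 10                      # assumed configurable caching timescale
caching_agent = QAgent(n_actions=4)    # which of 4 hypothetical services to cache
offload_agent = QAgent(n_actions=2)    # 0 = compute locally, 1 = offload to edge

def run_episode(steps=100):
    cache_state, cached = 0, 0
    total_cost = 0.0
    for t in range(steps):
        if t % CACHE_PERIOD == 0:
            cached = caching_agent.act(cache_state)
        requested = random.randrange(4)          # toy service-request model
        fast_state = (cached, requested)
        a = offload_agent.act(fast_state)
        # Toy latency/energy cost: offloading is cheap only when the
        # requested service is already cached at the edge.
        cost = 1.0 if (a == 1 and cached == requested) else 3.0
        total_cost += cost
        next_state = (cached, random.randrange(4))
        offload_agent.update(fast_state, a, -cost, next_state)
        if (t + 1) % CACHE_PERIOD == 0:
            caching_agent.update(cache_state, cached, -cost, cache_state)
    return total_cost
```

The key design point mirrored from the abstract is the hierarchy: the slow agent's caching action becomes part of the fast agent's state, so the per-slot offloading policy is always conditioned on what is currently cached, and changing `CACHE_PERIOD` reconfigures the caching timescale without touching the fast loop.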
