Abstract

With the growing demand for latency-critical and computation-intensive Internet of Things (IoT) services, the IoT-oriented network architecture, mobile-edge computing (MEC), has emerged as a promising technique to reinforce the computation capability of resource-constrained IoT devices. To exploit cloud-like functions at the network edge, service caching has been implemented to reuse the computation task input/output data, thus effectively reducing the delay incurred by data retransmissions and repeated execution of the same task. In a multiuser cache-assisted MEC system, users’ preferences for different types of services, possibly dependent on their locations, play an important role in the joint design of communication, computation, and service caching. In this article, we consider multiple representative locations, where users at the same location share the same preference profile for a given set of services. Specifically, by exploiting the users’ location-aware preference profiles, we propose joint optimization of the binary cache placement, the edge computation resource, and the bandwidth (BW) allocation to minimize the expected sum-energy consumption, subject to the BW and computation limitations as well as the service latency constraints. To effectively solve the mixed-integer nonconvex problem, we propose a deep learning (DL)-based offline cache placement scheme using a novel stochastic quantization-based discrete-action generation method. The proposed hybrid learning framework combines the benefits of the model-free DL approach and model-based optimization. Simulations verify that the proposed DL-based scheme saves roughly 33% and 6.69% of energy consumption compared with greedy caching and popular caching, respectively, while achieving up to 99.01% of the optimal performance.
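The abstract does not detail the stochastic quantization-based discrete-action generation method. As a hedged illustration only, the sketch below assumes a common pattern for such methods: a learned model outputs relaxed (continuous) caching probabilities, from which several binary placement candidates are sampled, capacity-infeasible candidates are repaired, and the candidate with the lowest cost under a given objective is kept. All function and parameter names here are hypothetical, not from the paper.

```python
import numpy as np

def stochastic_quantize(p, capacity, objective, n_candidates=10, rng=None):
    """Illustrative stochastic quantization of relaxed caching probabilities.

    p            : array of relaxed caching probabilities in [0, 1], one per service
    capacity     : maximum number of services the edge cache can hold (assumed constraint)
    objective    : callable mapping a binary placement vector to a scalar cost
    n_candidates : number of Bernoulli-sampled binary candidates to evaluate
    """
    rng = np.random.default_rng(rng)
    best_x, best_cost = None, np.inf
    for _ in range(n_candidates):
        # Sample a binary placement: service i is cached with probability p[i].
        x = (rng.random(p.shape) < p).astype(int)
        if x.sum() > capacity:
            # Repair infeasible candidates: keep the highest-probability cached services.
            keep = np.argsort(-p * x)[:capacity]
            x = np.zeros_like(x)
            x[keep] = 1
        cost = objective(x)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x, best_cost
```

In this sketch the model-based side of the hybrid framework would enter through `objective`, e.g. the expected sum-energy obtained by solving the bandwidth and computation-resource allocation for a fixed binary placement.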
