Abstract

The proliferation of Internet of Things (IoT) data and innovative mobile services has created a growing need for low-latency access to resources such as data and computing services. Mobile edge computing has become an effective computing paradigm to meet this requirement by placing resources and dispatching tasks at edge clouds near mobile users. The key challenge of such a solution is how to efficiently place resources and dispatch tasks in the edge clouds so as to meet mobile users' QoS requirements or maximize the platform's utility. In this paper, we study the joint optimization problem of resource placement and task dispatching in mobile edge clouds across multiple timescales under dynamically changing edge-server status. We first propose a two-stage iterative algorithm that solves the joint optimization problem at different timescales and can accommodate the differing dynamics of edge resources and tasks. We then propose a reinforcement learning (RL) based algorithm that leverages the learning capability of the Deep Deterministic Policy Gradient (DDPG) technique to cope with network variations and dynamics as well. The results from our trace-driven simulations demonstrate that both proposed approaches can effectively place resources and dispatch tasks across two timescales to maximize the total utility of all scheduled tasks.
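To make the two-timescale structure concrete, the following is a minimal, self-contained sketch (not the paper's actual formulation; the utility model, placement rule, period length, and all constants below are illustrative assumptions). On the slow timescale, resource placement is revised every `T_SLOW` slots by replicating the resource on the server with the largest accumulated unserved demand; on the fast timescale, tasks are dispatched greedily each slot to servers that hold the resource and have spare capacity.

```python
import random

# Illustrative toy only: all names and numbers are assumptions, not the
# paper's algorithm.
T_SLOW = 10          # slow-timescale placement period (in slots)
N_SERVERS = 3
CAPACITY = 2         # tasks a server can serve per slot

def dispatch(tasks, placement, capacity):
    """Fast-timescale greedy dispatch.

    Each task is served by its preferred server if that server holds the
    resource and has spare capacity; otherwise it counts as unserved demand.
    Returns (utility = number of served tasks, per-server unserved counts).
    """
    load = {s: 0 for s in placement}
    served = 0
    unserved = [0] * N_SERVERS
    for preferred in tasks:
        if preferred in placement and load[preferred] < capacity:
            load[preferred] += 1
            served += 1
        else:
            unserved[preferred] += 1
    return served, unserved

random.seed(1)
placement = {0}                          # resource initially only on server 0
total_utility = 0
demand_gap = [0] * N_SERVERS             # unserved demand since last placement
for slot in range(100):
    if slot % T_SLOW == 0 and slot > 0:  # slow timescale: adapt placement
        placement.add(max(range(N_SERVERS), key=lambda s: demand_gap[s]))
        demand_gap = [0] * N_SERVERS
    # fast timescale: each slot, 4 tasks arrive, each preferring a server
    tasks = [random.randrange(N_SERVERS) for _ in range(4)]
    served, unserved = dispatch(tasks, placement, CAPACITY)
    demand_gap = [g + u for g, u in zip(demand_gap, unserved)]
    total_utility += served
```

Even in this toy, the slow loop gradually replicates the resource onto the servers whose local demand the current placement cannot serve, while the fast loop maximizes per-slot utility under whatever placement is in force, mirroring the separation of timescales described above.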
