Abstract

Prompted by the remarkable progress in both the Internet of Things and 5G technology, mobile edge computing (MEC) has been attracting increasing attention from both industry and academia. In an MEC system, edge nodes (ENs) deployed at base stations offer computing resources to nearby resource-hungry mobile devices (MDs), providing high-quality services to edge users. Most studies address either task offloading or resource allocation in MEC systems, and few consider optimizing both at the same time. In this paper, we propose an MEC system that minimizes MDs' expected long-term delay and energy costs by offloading tasks to nearby ENs and allocating resources to those tasks. In view of the differences between the two decision problems, we develop a solution comprising two deep reinforcement learning (DRL) algorithms, one for each problem: an improved advantage actor-critic (A2C) algorithm solves the task offloading problem, while the deep deterministic policy gradient (DDPG) algorithm solves the resource allocation problem. The two algorithms are trained alternately, so that task offloading and resource allocation are optimized jointly. Experimental results show that A2C-DDPG outperforms existing task offloading and resource allocation schemes.
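The alternating training scheme described in the abstract can be sketched as follows. This is a hypothetical, minimal illustration, not the authors' implementation: both policies are stubs (a real system would use neural networks via A2C and DDPG), and the edge-node count, cost function, and episode schedule are assumptions chosen only to make the control flow concrete. The discrete-action agent picks which EN a task is offloaded to; the continuous-action agent picks what fraction of that EN's computing resource the task receives; the two agents take turns being updated.

```python
import random

NUM_ENS = 3  # number of edge nodes (assumed for illustration)

def offload_policy(task_load):
    """Stub for the A2C actor: choose an EN index (discrete action)."""
    return random.randrange(NUM_ENS)

def allocate_policy(task_load):
    """Stub for the DDPG actor: choose a resource fraction (continuous action)."""
    return random.uniform(0.1, 1.0)

def step_cost(task_load, en, frac):
    """Toy delay-plus-energy cost: delay grows as the allocated fraction shrinks."""
    delay = task_load / (frac * 10.0)
    energy = 0.5 * frac * task_load
    return delay + energy

def train_alternately(episodes=4, tasks_per_episode=5, seed=0):
    """Alternate updates between the two agents across episodes.

    Even episodes would update the A2C (offloading) agent, odd episodes
    the DDPG (allocation) agent; here we only record the schedule and
    accumulate the toy cost, since the policies are stubs.
    """
    random.seed(seed)
    schedule = []
    total_cost = 0.0
    for ep in range(episodes):
        agent = "A2C" if ep % 2 == 0 else "DDPG"
        schedule.append(agent)
        for _ in range(tasks_per_episode):
            load = random.uniform(1.0, 5.0)      # task size (arbitrary units)
            en = offload_policy(load)            # offloading decision
            frac = allocate_policy(load)         # resource-allocation decision
            total_cost += step_cost(load, en, frac)
        # (gradient updates for `agent` would happen here in a real system)
    return schedule, total_cost

schedule, cost = train_alternately()
print(schedule)
```

The point of the sketch is the structure, not the numbers: fixing one agent's policy while the other trains turns the joint offloading-and-allocation problem into two simpler single-agent problems solved in alternation.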
