Abstract

Mobile edge computing (MEC) is a promising technology for relieving the computing pressure on edge user equipment (UE): by offloading part of a task, it can effectively reduce execution delay and energy consumption and improve the quality of the computation experience for mobile users. Nevertheless, designing a computation offloading and resource allocation strategy for the portion of a task offloaded to the MEC server remains a challenge. In this paper, a task is first divided into two sub-tasks; one is executed locally, and the other is offloaded to an MEC server located near the base station (BS). Under a dynamic offloading and resource allocation strategy, the optimal offloading proportion of the task, the local computation power, and the transmission power are learned by deep reinforcement learning (DRL). We propose two DRL-based approaches, based on the deep Q network (DQN) and the deep deterministic policy gradient (DDPG), to minimize the weighted sum cost comprising the execution delay and energy consumption of the UE. Both DQN and DDPG can handle large-scale state spaces and learn an efficient offloading proportion and power allocation independently at each UE. Simulation results demonstrate that each UE learns effective execution policies, and the proposed schemes achieve a significant reduction in the sum cost of the task compared with traditional baselines.
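
The abstract describes the weighted sum cost and the DRL action (offloading proportion, local computation power, transmission power) only at a high level. The sketch below is a minimal illustration of how such a cost could be evaluated for a partially offloaded task under a common partial-offloading model; all parameter names, values, and the parallel-execution assumption are illustrative assumptions, not taken from the paper.

import math

# Hypothetical parameters -- the abstract does not give the paper's exact
# model, so every name and constant below is an illustrative assumption.
CYCLES_PER_BIT = 1000        # CPU cycles needed per bit of the task
KAPPA = 1e-27                # effective switched-capacitance coefficient
BANDWIDTH_HZ = 1e6           # uplink bandwidth to the BS
NOISE_W = 1e-9               # receiver noise power
CHANNEL_GAIN = 1e-5          # UE-to-BS channel gain
F_MEC_HZ = 10e9              # MEC server CPU frequency
W_DELAY, W_ENERGY = 0.5, 0.5 # weights of the sum cost

def weighted_sum_cost(task_bits, alpha, f_local_hz, p_tx_w):
    """Weighted sum of execution delay and UE energy for partial offloading.

    alpha      : fraction of the task offloaded to the MEC server (0..1)
    f_local_hz : CPU frequency the UE uses for the local sub-task
    p_tx_w     : UE transmit power for the offloaded sub-task
    """
    cycles = task_bits * CYCLES_PER_BIT

    # Local sub-task: (1 - alpha) of the work runs on the UE CPU.
    t_local = (1 - alpha) * cycles / f_local_hz
    e_local = KAPPA * f_local_hz ** 2 * (1 - alpha) * cycles

    # Offloaded sub-task: upload over the wireless link, then execute on the MEC server.
    rate = BANDWIDTH_HZ * math.log2(1 + p_tx_w * CHANNEL_GAIN / NOISE_W)
    t_up = alpha * task_bits / rate
    e_up = p_tx_w * t_up
    t_mec = alpha * cycles / F_MEC_HZ

    # Assuming the two sub-tasks run in parallel, delay is the slower branch;
    # only the UE's own energy (local CPU + transmission) is counted.
    delay = max(t_local, t_up + t_mec)
    energy = e_local + e_up
    return W_DELAY * delay + W_ENERGY * energy

# Example: one DRL action (alpha, f_local, p_tx) is scored by this cost,
# whose negative could serve as the per-step reward for DQN or DDPG.
print(weighted_sum_cost(task_bits=1e6, alpha=0.6, f_local_hz=1e9, p_tx_w=0.2))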
