Abstract

In recent years, computationally intensive mobile applications, such as interactive games and augmented reality, have gained enormous popularity. This trend places a serious burden on mobile devices, which have limited computational resources and constrained battery capacity. Multi-access Edge Computing (MEC) has been proposed to alleviate this burden by offloading some computation tasks from mobile devices to edge servers. The fundamental challenge in MEC is how to select the subset of computation tasks to offload so that application requirements are satisfied and total energy consumption is minimized. Existing Deep Reinforcement Learning (DRL)-based offloading schemes focus either on non-real-time tasks or on real-time tasks with soft deadlines. In addition, these schemes do not work well when information about the system environment is incomplete. In this paper, we propose an innovative DRL-based task offloading method, PDMO, which guarantees that the deadlines of real-time tasks are met even when the system environment is only partially observable. Technically, the offloading problem is formulated as a Partially Observable Markov Decision Process (POMDP). To solve it, we devise POTD3, a Deep Deterministic Policy Gradient (DDPG)-based algorithm. Our experimental results indicate that PDMO works well in partially observable environments and outperforms existing offloading schemes in terms of energy consumption, the number of deadline misses, and the completion rate of non-real-time tasks.
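As background for the two techniques named above, the following are the textbook forms of a POMDP and of the TD3 update (TD3 being the twin-delayed variant of DDPG, which the name POTD3 suggests); these are standard definitions, not the paper's specific design. A POMDP is a tuple

\[
(\mathcal{S}, \mathcal{A}, \mathcal{O}, P, Z, r, \gamma),
\]

where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) the action space (here, plausibly the offloading decisions), \(\mathcal{O}\) the observation space, \(P(s' \mid s, a)\) the transition kernel, \(Z(o \mid s', a)\) the observation function, \(r(s, a)\) the reward (e.g., a negative energy cost with deadline penalties), and \(\gamma \in [0, 1)\) the discount factor. In standard TD3, two target critics and clipped target-policy smoothing noise form the critic target

\[
y = r + \gamma \min_{i \in \{1,2\}} Q_{\theta_i'}\!\big(s', \pi_{\phi'}(s') + \epsilon\big),
\qquad
\epsilon \sim \operatorname{clip}\big(\mathcal{N}(0, \sigma), -c, c\big),
\]

and the actor is updated along the deterministic policy gradient

\[
\nabla_{\phi} J(\phi) = \mathbb{E}\Big[\nabla_{a} Q_{\theta_1}(s, a)\big|_{a = \pi_{\phi}(s)} \, \nabla_{\phi} \pi_{\phi}(s)\Big].
\]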
