Abstract

Mobile edge computing (MobEC) builds an Information Technology (IT) service environment that enables cloud-computing capabilities at the edge of mobile networks. To overcome the limited battery power and computation capability of mobile devices, task offloading to MobEC has been developed to reduce service latency and ensure high service efficiency. However, most existing schemes focus only on one-shot offloading and pay little attention to task dependency. Since modern communication networks are increasingly complex and dynamic, a more comprehensive and adaptive approach is needed that accounts for both the energy constraint and the inherent dependency among tasks. To this end, in this paper we study the problem of dependency-aware task-offloading decisions in MobEC, aiming to minimize the execution time of mobile applications under energy-consumption constraints. To solve this problem, we propose a model-free approach based on reinforcement learning (RL), i.e., a Q-learning approach that adaptively learns to jointly optimize the offloading decision and the energy consumption by interacting with the network environment. Simulation results show that our RL-based approach achieves a significant reduction in total execution time with comparatively less energy consumption.
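To make the problem setting concrete, the following is a minimal sketch (not the paper's implementation) of an energy-constrained, dependency-aware task model: an application is a set of tasks with precedence edges, and each task can run locally or be offloaded if the corresponding energy cost fits within the device's remaining battery. All names, units, and cost models below (cycles, data_bits, f_local, kappa, rate, p_tx, f_edge) are illustrative assumptions, not quantities taken from the paper.

```python
# Hypothetical task model for dependency-aware, energy-constrained offloading.
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: int
    cycles: float                                       # CPU cycles required (assumed unit)
    data_bits: float                                    # input data to upload if offloaded
    predecessors: list = field(default_factory=list)    # dependency edges (task_ids)

def local_cost(task, f_local, kappa):
    """Latency and energy if the task runs on the mobile device."""
    latency = task.cycles / f_local
    energy = kappa * task.cycles * f_local ** 2         # common dynamic CPU energy model
    return latency, energy

def offload_cost(task, rate, p_tx, f_edge):
    """Latency and transmission energy if the task is offloaded to the edge server."""
    latency = task.data_bits / rate + task.cycles / f_edge
    energy = p_tx * task.data_bits / rate
    return latency, energy

def feasible_actions(task, battery, f_local, kappa, rate, p_tx, f_edge):
    """Return the offloading choices that respect the device's energy budget."""
    actions = {}
    t_l, e_l = local_cost(task, f_local, kappa)
    t_o, e_o = offload_cost(task, rate, p_tx, f_edge)
    if e_l <= battery:
        actions["local"] = (t_l, e_l)
    if e_o <= battery:
        actions["offload"] = (t_o, e_o)
    return actions
```

A task becomes schedulable only once all of its predecessors have finished, which is what makes one-shot, dependency-unaware offloading suboptimal.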

Highlights

  • With the proliferation of smart mobile devices, a multitude of mobile applications are emerging and gaining popularity, such as location-based virtual/augmented reality and online gaming [1]

  • The tension between computation-intensive applications and resource-constrained mobile devices creates a bottleneck for achieving satisfactory Quality-of-Service (QoS) and Quality-of-Experience (QoE) [6], and it drives a revolution in computing infrastructure

  • We focus on the mobile edge computing (MobEC) scenario depicted in Fig. 1, which comprises a set of mobile devices, one of which is labeled as the target device


Summary

INTRODUCTION

With the proliferation of smart mobile devices, a multitude of mobile applications are emerging and gaining popularity, such as location-based virtual/augmented reality and online gaming [1]. Because tasks within an application can be dependent, a task that requires the output of task 1 cannot start until task 1 completes, leading the execution latency of the application to increase. To this end, this paper aims to help the target mobile device make offloading decisions that minimize the application execution latency, jointly considering the task dependency, the local status (e.g., the computation capability and the available battery power of the target mobile device), and the task queueing in the edge server. Our Q-learning approach is built on the state of the network environment (defined by the task queue and the battery power of the target mobile device, as well as the task queue in the edge server), the action of the target mobile device (corresponding to its offloading decision), and the feedback reward (indicated by the execution latency of each task).
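The following is a minimal tabular Q-learning sketch under the state/action/reward design summarized above (device task queue, battery level, and edge-server queue as the state; the offloading decision as the action; the negative per-task execution latency as the reward). The environment interface, hyperparameter values, and all identifiers are illustrative assumptions, not the authors' code.

```python
# Minimal tabular Q-learning sketch for the offloading decision (assumed design).
import random
from collections import defaultdict

ACTIONS = [0, 1]        # 0: execute locally, 1: offload to the edge server

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # assumed learning rate, discount, exploration
Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy selection over the two offloading choices."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One-step Q-learning update: Q <- Q + alpha * (TD target - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Hypothetical training loop; `env` must expose reset()/step() that return a
# discretized state tuple (device_queue_len, battery_level, edge_queue_len) and
# a reward equal to the negative execution latency of the finished task.
def train(env, episodes=1000):
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = choose_action(state)
            next_state, reward, done = env.step(action)
            q_update(state, action, reward, next_state)
            state = next_state
```

Because the reward is the negative latency of each task, maximizing the cumulative reward corresponds to minimizing the total application execution time, while actions that would exceed the energy budget can simply be excluded from the feasible action set.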

RELATED WORK
TASK SCHEDULING AND RESOURCE ALLOCATION
ENERGY CONSTRAINT OF THE MOBILE DEVICE
REINFORCEMENT LEARNING BASED COMPUTATION
SIMULATION AND EVALUATION
CONCLUSION
