Abstract

Mobile edge computing (MEC) has been envisioned as a promising paradigm that can effectively enhance the computational capacity of wireless user devices (WUDs) and the quality of experience of mobile applications. One of the most crucial issues in MEC is computation offloading, which determines how WUDs’ tasks are offloaded to edge servers for intensive computation. Conventional mathematical-programming-based offloading approaches can struggle in dynamic MEC environments because of time-varying channel conditions, caused primarily by WUD mobility. To address this problem, reinforcement learning (RL) based offloading approaches have been proposed, which develop offloading policies by mapping MEC states to offloading actions. However, these approaches can fail to converge in large-scale MEC because the state and action spaces grow exponentially. In this article, we propose a novel online computation offloading approach that effectively reduces task latency and energy consumption in dynamic MEC with large-scale WUDs. First, an RL-based computation offloading and energy transmission algorithm is proposed to accelerate the learning process. Then, a joint optimization method is adopted to develop the allocation algorithm, which obtains near-optimal solutions for energy and computation resource allocation. Simulation results show that the proposed approach converges efficiently and achieves significant performance improvements over baseline approaches.
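The proposed algorithms are detailed in the full text; as a rough illustration of the state-to-action mapping that RL-based offloading relies on, the sketch below trains a tabular Q-learning agent to choose between local execution and offloading under a toy time-varying channel. Every name, cost value, and the channel model here is an illustrative assumption, not the paper's actual algorithm.

    # Hypothetical sketch: tabular Q-learning for an offload/local decision
    # in a toy MEC model. All constants and the cost model are assumptions.
    import random

    N_CHANNEL_STATES = 5        # discretized channel quality levels (assumption)
    ACTIONS = [0, 1]            # 0 = compute locally, 1 = offload to edge server
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    # Q-table: maps (channel state, action) -> estimated long-term cost.
    Q = [[0.0 for _ in ACTIONS] for _ in range(N_CHANNEL_STATES)]

    def step_cost(state, action):
        """Toy latency-plus-energy cost: offloading is cheap on good channels,
        expensive on bad ones; local execution has a fixed cost (assumption)."""
        if action == 1:  # offload: cost falls as channel quality rises
            return 10.0 - 2.0 * state + random.gauss(0, 0.5)
        return 6.0 + random.gauss(0, 0.5)  # local execution

    def choose_action(state):
        """Epsilon-greedy policy: mostly exploit the lowest-cost action."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return min(ACTIONS, key=lambda a: Q[state][a])

    # Training loop over a random-walk channel (a stand-in for WUD mobility).
    state = random.randrange(N_CHANNEL_STATES)
    for _ in range(20000):
        action = choose_action(state)
        cost = step_cost(state, action)
        next_state = min(N_CHANNEL_STATES - 1,
                         max(0, state + random.choice([-1, 0, 1])))
        # Q-learning update toward the minimum expected future cost.
        Q[state][action] += ALPHA * (
            cost + GAMMA * min(Q[next_state]) - Q[state][action])
        state = next_state

    for s in range(N_CHANNEL_STATES):
        best = 'offload' if Q[s][1] < Q[s][0] else 'local'
        print(f"channel state {s}: best action = {best}")

In this toy setup the learned policy offloads only when the channel is good, which mirrors the state-to-action mapping described above; the paper's approach additionally handles energy transmission and resource allocation, which this sketch omits.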
