Abstract

With Mobile Edge Computing (MEC), computation-intensive applications can be offloaded to nearby edge servers to support latency-sensitive applications on mobile devices. Unlike the cloud, edge servers usually have limited resources, so selecting which edge server should run the offloaded computation becomes an important issue. Although server selection has received considerable attention, little work has considered the limited coverage of edge servers and frequent user movement, which introduce many dynamically changing factors that affect edge server workloads and make it hard to achieve a long-term optimum in edge server selection. To deal with these challenges, we model the problem of continuous server selection as a Markov Decision Process (MDP). The difficulty of this problem is that achieving a long-term optimum requires future knowledge, such as user mobility and server workload, which is not known a priori. Without such knowledge, the optimal policy cannot be found through traditional methods. To address this problem, we propose a Deep Reinforcement Learning (DRL) based algorithm that learns the selection policy from the observed performance of past server selections. Specifically, a Long Short-Term Memory (LSTM) based neural network is used to encode historical information, which helps infer future knowledge of the dynamically changing factors. The DRL model then selects the optimal server automatically based on the extracted system states. Extensive trace-driven evaluations demonstrate that the proposed DRL-based algorithm achieves the lowest overall cost compared to existing solutions.
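To make the described architecture concrete, the sketch below shows one possible way an LSTM-based Q-network could encode a history of observations and pick an edge server. This is an illustrative assumption, not the paper's actual implementation: the dimensions (OBS_DIM, N_SERVERS, HISTORY_LEN), the hidden size, and the use of PyTorch are all hypothetical choices made for the example.

```python
# Illustrative sketch only: a minimal LSTM-based Q-network for edge-server
# selection. All dimensions and hyperparameters below are assumptions for
# the example; the paper's actual architecture may differ.
import torch
import torch.nn as nn

OBS_DIM = 8        # assumed per-step observation size (e.g., server workloads, user position)
N_SERVERS = 5      # assumed number of candidate edge servers
HISTORY_LEN = 10   # assumed length of the observation history fed to the LSTM

class LSTMQNetwork(nn.Module):
    """Encodes the observation history with an LSTM, then maps the final
    hidden state to one Q-value per candidate edge server."""
    def __init__(self, obs_dim=OBS_DIM, n_actions=N_SERVERS, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, history):                 # history: (batch, T, obs_dim)
        _, (h_n, _) = self.lstm(history)        # h_n: (1, batch, hidden)
        return self.head(h_n.squeeze(0))        # Q-values: (batch, n_actions)

# Greedy server selection from a (randomly generated) observation history.
net = LSTMQNetwork()
history = torch.randn(1, HISTORY_LEN, OBS_DIM)
with torch.no_grad():
    q_values = net(history)
selected_server = q_values.argmax(dim=1).item()
print(f"Selected edge server index: {selected_server}")
```

In this sketch, the LSTM plays the role described in the abstract of summarizing historical system states, and the linear head plays the role of the DRL decision component that scores each candidate server; training (e.g., with a DQN-style loss) is omitted.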
