Abstract

In mobile edge computing systems, the edge server placement problem is commonly tackled as a multi-objective optimization problem and solved with mixed integer programming, heuristic, or meta-heuristic algorithms. These methods, however, suffer from significant drawbacks such as poor scalability, convergence to local optima, and difficult parameter tuning. To overcome these defects, we propose a novel edge server placement algorithm based on deep Q-network and reinforcement learning, dubbed DQN-ESPA, which can achieve optimal placements without relying on previous placement experience. In DQN-ESPA, the edge server placement problem is modeled as a Markov decision process, formalized with a state space, an action space, and a reward function, and is subsequently solved using a reinforcement learning algorithm. Experimental results using real datasets from Shanghai Telecom show that DQN-ESPA outperforms state-of-the-art algorithms such as the simulated annealing placement algorithm (SAPA), Top-K placement algorithm (TKPA), K-Means placement algorithm (KMPA), and random placement algorithm (RPA). In particular, with a comprehensive consideration of access delay and workload balance, DQN-ESPA achieves up to 13.40% and 15.54% better placement performance for 100 and 300 edge servers, respectively.

Highlights

  • Mobile cloud computing is the combination of cloud computing and mobile computing to bring rich computational resources to end mobile users, network operators, and cloud computing providers

  • The access delay and workload balance of DQN-ESPA when placing 100 edge servers are shown in Figure 4a,b

  • The results show that the DQN-ESPA is able to obtain both the lowest average delay and the minimum workload standard deviation

Summary

Introduction

Mobile cloud computing is the combination of cloud computing and mobile computing to bring rich computational resources to end mobile users, network operators, and cloud computing providers. Different from existing approaches, this paper proposes a novel edge server placement algorithm based on deep reinforcement learning, dubbed DQN-ESPA. The edge server placement problem (ESPP) is modeled as a Markov decision process (MDP), where the goal is to balance edge server workloads and minimize the access delay between the mobile user and the edge server. A new solution model is proposed for the ESPP based on the MDP and reinforcement learning. In this model, location sequences of edge servers are modeled as states, decisions on the move direction of edge servers are modeled as actions, and the negative access delay and standard deviation of workloads are modeled as rewards. An edge server placement algorithm based on a deep Q-network (DQN), named DQN-ESPA, is proposed to solve the ESPP by combining a deep neural network with Q-learning.
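The MDP components described above (locations as states, move directions as actions, negative delay and workload deviation as rewards) can be sketched as a toy environment. This is a hypothetical simplification, not the paper's implementation: base station coordinates, the grid size, and Euclidean distance as a delay proxy are all illustrative assumptions, and the full DQN-ESPA would learn a policy over this environment with a neural Q-network.

```python
import numpy as np

class EdgeServerPlacementEnv:
    """Toy MDP for edge server placement (illustrative simplification).

    State:  array of edge server grid positions.
    Action: (server index, direction) moving one server by one cell.
    Reward: negative access delay minus workload standard deviation,
            following the reward structure described in the paper.
    """

    # hypothetical move directions: up, down, right, left
    MOVES = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}

    def __init__(self, base_stations, n_servers, grid=10, seed=0):
        self.bs = np.asarray(base_stations, dtype=float)  # (M, 2) coords
        self.n = n_servers
        self.grid = grid
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # random initial placement of the n edge servers on the grid
        self.pos = self.rng.integers(0, self.grid, size=(self.n, 2)).astype(float)
        return self.pos.copy()

    def _reward(self):
        # distance from every base station to every edge server
        d = np.linalg.norm(self.bs[:, None, :] - self.pos[None, :, :], axis=2)
        nearest = d.argmin(axis=1)           # each station -> closest server
        delay = d.min(axis=1).mean()         # proxy for average access delay
        load = np.bincount(nearest, minlength=self.n)  # stations per server
        return -(delay + load.std())         # jointly penalize delay and imbalance

    def step(self, server, direction):
        # apply one move action, clipped to the grid boundary
        self.pos[server] = np.clip(
            self.pos[server] + self.MOVES[direction], 0, self.grid - 1
        )
        return self.pos.copy(), self._reward()
```

A learning agent would repeatedly call `step`, observe the reward, and update its Q-value estimates; in DQN-ESPA the Q-function is approximated by a deep neural network rather than a table.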

Related Work
System Model
MEC Model
Problem Description
Algorithm Design
MDP Model
Action
Reward
DQN-ESPA
Performance Evaluation
Configuration of the Experiments
Dataset Description
Experimental Results
Conclusions