Abstract

We consider a resource allocation and offloading decision-making problem in a mobile edge computing (MEC) network. Since the locations of user equipments (UEs) vary over time in practice, we consider a dynamic network, where the UEs may leave or join the network coverage at any location. Since the joint offloading decision that minimizes the network cost also varies with the topology, the best offloading decision expected for the previous topology would not match the new topology. Consequently, the system suffers from recurring cost peaks whenever the topology changes. We therefore propose a robust distributed hierarchical online learning approach to enhance the algorithm's robustness and reduce the cost peaks. Specifically, the UEs learn the utility of each offloading decision via deep Q-networks (DQNs) through their interaction with the MEC network. Meanwhile, the computational access points (CAPs) train their deep neural networks (DNNs) online with real-time data collected from the UEs to predict the UEs' corresponding Q-value vectors. The UEs and CAPs thus form a hierarchical collaborative-learning structure. When the topology changes, each UE downloads its predicted Q-value vector as a Q-bias vector and learns only its difference from the actual Q-value vector via its DQN. With the agents learning in a distributed manner, both the peak and sum costs are reduced, as the joint offloading decision can start from a near-local-optimal point. In simulations, our robust approach reduces the peak cost and sum cost by up to 50% and 30%, respectively. This demonstrates the need for robust learning algorithm design in a practical dynamic MEC network.
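The topology-change step described above, where a UE downloads the CAP-predicted Q-value vector as a bias and its DQN learns only the residual, can be illustrated with a minimal tabular sketch. The action count, the random stand-in for the CAP's prediction, and the function names are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ACTIONS = 4  # illustrative number of offloading decisions per UE

# Q-bias vector downloaded from the CAP's DNN after a topology change
# (random numbers stand in here for the CAP's online prediction).
q_bias = rng.normal(size=NUM_ACTIONS)

# The UE's DQN only has to learn the residual between the true
# Q-values and the downloaded bias, so it starts near zero.
q_residual = np.zeros(NUM_ACTIONS)

def effective_q() -> np.ndarray:
    """Q-values used for action selection: downloaded bias + learned residual."""
    return q_bias + q_residual

def select_action(epsilon: float = 0.1) -> int:
    """Epsilon-greedy selection over the effective Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(NUM_ACTIONS))
    return int(np.argmax(effective_q()))

def td_update(action: int, target: float, lr: float = 0.1) -> None:
    """Tabular stand-in for the DQN's gradient step on the residual."""
    q_residual[action] += lr * (target - effective_q()[action])
```

Because the bias already encodes the CAP's estimate for the new topology, the residual update only has to correct the prediction error, which is why the joint decision can start from a near-local-optimal point rather than from scratch.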

Highlights

  • We considered a dynamic mobile edge computing (MEC) network in which both the locations and the total number of user equipments (UEs) change over time

  • Since the topology varies over time, an offloading strategy learned for one topology realization would not match the next

  • We proposed an online hierarchical learning algorithm, where the computational access points (CAPs) trained their deep neural networks (DNNs) with real-time data collected from the UEs to predict the Q-bias vectors

Summary

INTRODUCTION

The evolution of the Internet of Things (IoT) facilitates the development of new applications while bringing new challenges. Some works have considered resource allocation problems in MEC networks, but to the best of our knowledge, few studies have addressed distributed algorithm design for the joint resource allocation and decision-making problem in multi-server multi-user dynamic MEC networks. In our previous work [25], we formulated an expected sum-cost minimization problem in a multi-user multi-server MEC network. Here, we extend that study to a dynamic MEC network, where UEs are allowed to join or leave at any location within the network coverage; in such a network, both the total number and the distribution of the UEs change over time. We formulate a resource allocation problem in a multi-UE multi-CAP dynamic MEC network to decide the transmission power and computational resources for a given joint offloading decision. I(·) denotes the indicator function, whose value is 1 when the statement in the parentheses is true and 0 otherwise.
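The indicator-function convention above can be made concrete with a short sketch. The cost expression below is an assumed illustrative form showing how I(·) selects between local- and edge-computing costs for a binary offloading decision; it is not the paper's exact cost model.

```python
def indicator(statement: bool) -> int:
    """I(.): takes value 1 when the statement in the parentheses is true, 0 otherwise."""
    return 1 if statement else 0

def ue_cost(offload: bool, local_cost: float, edge_cost: float) -> float:
    """Illustrative (assumed) use: a UE pays the local-computing cost when it
    does not offload, and the edge-computing cost when it does."""
    return indicator(not offload) * local_cost + indicator(offload) * edge_cost
```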

SYSTEM MODEL
Local Computing
Edge Computing
PROBLEM FORMULATION
The Resource Allocation Problem
The Decision-Making Problem
THE ROBUST HIERARCHICAL LEARNING METHOD
The Resource Allocation Algorithm at the CAPs
The Decision-Making Algorithm at the UEs
The Hierarchical Learning Algorithm
SIMULATION RESULTS
Computation Time Analysis
Performance Analysis
The Performance for Different N
The Performance for Different αD and αE
The Impact of Nmax
Findings
CONCLUSION