Abstract

Mobile edge computing has been envisioned as a promising paradigm to provide User Equipment (UE) with powerful computing capabilities while maintaining low task latency, by offloading computing tasks from UEs to Edge Servers (ESs) deployed at the edge of the network. Due to the ESs' limited computing resources, dynamic network conditions, and varying UE task requirements, computation offloading must be carefully designed so that satisfactory task performance and low UE energy consumption can both be achieved. Since the scheduling objective function and constraints are typically non-linear, the scheduling of computation offloading is generally NP-hard, and optimal solutions are difficult to obtain. To address this issue, this paper combines deep learning and reinforcement learning, namely deep reinforcement learning, to approximate the computation offloading policy with neural networks and without the need for labeled data. In addition, we integrate Multi-agent Deep Deterministic Policy Gradient (MADDPG) with a federated learning algorithm to improve the generalization performance of the trained neural network model. According to our simulation results, the proposed approach converges within 10,000 steps, comparable to the method based on MADDPG alone. In addition, the proposed approach achieves lower cost and better QoS performance than the MADDPG-only approach.

Keywords: Deep reinforcement learning; Multi-agent deep deterministic policy gradient; Federated learning; Mobile edge computing
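
To make the MADDPG-plus-federated-learning integration concrete, the following is a minimal sketch in Python (using PyTorch) of federated averaging applied to per-agent actor networks. It is one plausible reading of the abstract's description, not the paper's exact method: the Actor architecture, the fed_avg helper, and the uniform-weight aggregation are all illustrative assumptions.

import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy network mapping a UE's local state to an offloading action.
    The layer sizes here are placeholders, not the paper's configuration."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def fed_avg(actors: list) -> dict:
    """Uniformly average the parameters of all agents' actor networks (FedAvg-style)."""
    global_state = copy.deepcopy(actors[0].state_dict())
    for key in global_state:
        stacked = torch.stack([a.state_dict()[key].float() for a in actors])
        global_state[key] = stacked.mean(dim=0)
    return global_state

# After each round of local MADDPG updates, the ES could aggregate the agents'
# actors and broadcast the averaged model back, which is how federated learning
# would improve generalization across heterogeneous UEs.
actors = [Actor(state_dim=8, action_dim=2) for _ in range(4)]
global_weights = fed_avg(actors)
for actor in actors:
    actor.load_state_dict(global_weights)

Only the aggregation step is sketched here; the local MADDPG training loop (replay buffers, centralized critics, target networks) proceeds unchanged between aggregation rounds.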
