Abstract

Mobile fog, which distributes computing and networking resources and services near terminals, is gradually becoming the dominant mobile edge computing (MEC) paradigm. In a mobile fog environment, quality of service is affected by offloading speeds and fog processing; however, traditional fog methods struggle to allocate computation resources because of the complex distribution of network states (that is, F-AP states, AP states, mobile device states, and code block states). In this paper, to improve the fog resource provisioning performance of mobile devices, a learning-based mobile fog scheme built on the deep deterministic policy gradient (DDPG) algorithm is proposed. An offloading block pulsating discrete event system is modeled as a Markov decision process (MDP), which enables offloading computation without knowing the transition probabilities among different network states. Furthermore, the DDPG algorithm is used to mitigate state space explosion and to learn an optimal offloading policy for distributed mobile fog computing. Simulation results show that the proposed scheme achieves performance improvements of 20%, 37%, and 46% over the policy gradient (PG), deterministic policy gradient (DPG), and actor–critic (AC) methods, respectively. Moreover, compared with the traditional fog provisioning scheme, the proposed scheme achieves a better fog resource provisioning cost under different numbers of locations and different task arrival rates.
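
To make the approach concrete, the sketch below shows how a DDPG agent of the kind the abstract describes could be trained against an offloading MDP. It is a minimal illustration, not the authors' implementation: the state and action dimensions, the random stand-in environment transitions, and the cost-style reward are hypothetical placeholders for the paper's F-AP/AP/device/code-block state encoding. Only the generic DDPG machinery (deterministic actor, Q-critic, target networks, replay buffer) follows the standard published algorithm.

```python
# Minimal DDPG sketch for an offloading MDP (illustrative; assumptions noted).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 8    # assumed size of the encoded network state
ACTION_DIM = 2   # assumed continuous offloading decision (e.g., split ratio)
GAMMA, TAU, BATCH = 0.99, 0.005, 64

class Actor(nn.Module):                       # deterministic policy mu(s)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)                    # action in [-1, 1]^ACTION_DIM

class Critic(nn.Module):                      # action-value Q(s, a)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64),
                                 nn.ReLU(), nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()         # slowly-tracking target networks
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer = deque(maxlen=100_000)                # replay buffer of (s, a, r, s')

def soft_update(target, online):
    for tp, p in zip(target.parameters(), online.parameters()):
        tp.data.mul_(1 - TAU).add_(TAU * p.data)

def train_step():
    if len(buffer) < BATCH:
        return
    s, a, r, s2 = map(torch.stack, zip(*random.sample(buffer, BATCH)))
    with torch.no_grad():                     # TD target from target networks;
        y = r + GAMMA * critic_t(s2, actor_t(s2))  # no transition probabilities
    critic_loss = F.mse_loss(critic(s, a), y)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    actor_loss = -critic(s, actor(s)).mean()  # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    soft_update(actor_t, actor); soft_update(critic_t, critic)

# Toy interaction loop; a real agent would observe the fog network instead.
s = torch.randn(STATE_DIM)
for _ in range(10_000):
    a = (actor(s).detach() + 0.1 * torch.randn(ACTION_DIM)).clamp(-1, 1)
    s2 = torch.randn(STATE_DIM)               # hypothetical next network state
    r = torch.tensor([-(a - 0.5).abs().sum().item()])  # hypothetical cost reward
    buffer.append((s, a, r, s2))
    train_step()
    s = s2
```

The replay buffer and slowly-updated target networks are what let DDPG cope with the large state space the abstract refers to, while the model-free temporal-difference update is why no transition probabilities among network states need to be known in advance.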
