Abstract

The growing demands of the Internet of Things (IoT) require a platform that supports real-time interactions and high availability of services to devices. In this context, the fog computing paradigm has emerged as an attractive solution for processing the data of IoT applications. Owing to the unpredictable traffic demands and resource heterogeneity of the fog environment, smart workload distribution is essential to achieve high resource utilization and computing efficiency. To this end, this paper considers a joint link and server load balancing problem with multiple cooperative access points (APs) in a combined edge-fog-cloud environment. The joint optimization problem is formulated as a stochastic game, and an actor-critic reinforcement learning framework, called FLoadNet, is proposed to optimize the joint policy of the multiple agents. FLoadNet consists of a centralized critic network with parameter sharing, together with distributed individual actor networks in the APs. Because of the learning dynamics and the partially observable environment, we propose an extended critic network model in which the cooperative APs learn to communicate among themselves while evaluating the value function. Unlike previous studies, the proposed critic network is designed to train both the value and message functions, which is shown to significantly reduce the computational cost. The main goal of this work is to advance efficient edge learning and the application of distributed learning algorithms to fog network load balancing. The experimental results show that FLoadNet outperforms baseline load balancing methods.
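The architecture described above can be illustrated with a minimal sketch: each AP holds its own actor network that maps a local observation to a distribution over offloading targets, while a single critic, shared across APs, produces both a message vector (for inter-AP communication) and a value estimate that conditions on the other APs' messages. All dimensions, layer shapes, and class names below are hypothetical illustrations, not the actual FLoadNet design; the paper's network details are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 APs, 4-dim local load observation,
# 2-dim learned message, 3 offload targets (e.g. edge / fog / cloud).
N_APS, OBS_DIM, MSG_DIM, N_ACTIONS = 3, 4, 2, 3

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class Actor:
    """Distributed per-AP policy: local observation -> action distribution."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(N_ACTIONS, OBS_DIM))
    def act(self, obs):
        return softmax(self.W @ obs)

class SharedCritic:
    """Centralized critic with parameter sharing across APs.

    One set of weights serves every AP, and the same network trains both
    heads: a message head (what an AP broadcasts to its peers) and a value
    head (which evaluates an AP's state given the peers' messages).
    """
    def __init__(self):
        self.Wm = rng.normal(scale=0.1, size=(MSG_DIM, OBS_DIM))            # message head
        self.Wv = rng.normal(scale=0.1, size=(1, OBS_DIM + (N_APS - 1) * MSG_DIM))  # value head
    def message(self, obs):
        return np.tanh(self.Wm @ obs)
    def value(self, obs_i, msgs_others):
        x = np.concatenate([obs_i] + msgs_others)
        return (self.Wv @ x).item()

actors = [Actor() for _ in range(N_APS)]
critic = SharedCritic()

# One forward pass: APs observe local load, exchange messages, evaluate value.
obs = [rng.normal(size=OBS_DIM) for _ in range(N_APS)]
msgs = [critic.message(o) for o in obs]
for i in range(N_APS):
    pi = actors[i].act(obs[i])
    v = critic.value(obs[i], [m for j, m in enumerate(msgs) if j != i])
    print(f"AP {i}: policy={np.round(pi, 2)}, value={v:.3f}")
```

Because the message head lives inside the shared critic rather than in a separate communication network, one backward pass through the critic updates both functions at once, which is the kind of weight reuse the abstract credits with reducing computational cost.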
