Abstract

Fog Computing emerged to support the growing demands of real-time Internet of Things (IoT) applications, which require high availability of distributed computing services. Intelligent workload distribution is needed to maximize the utilization of Fog resources while minimizing the time required to process these workloads. Such Load Balancing (LB) algorithms are critical in dynamic environments with heterogeneous resources, diverse workload requirements, and unpredictable traffic demands. In this paper, LB is provided using Reinforcement Learning (RL), which optimizes system performance by minimizing the waiting delay of IoT workloads. The novelty of the proposed approach is a privacy-aware state and reward representation tailored to provide efficient load distribution without requiring load and resource information from Fog nodes. Unlike existing RL-based LB approaches, our approach considers the privacy requirements of Fog service providers, who may wish to withhold such information to prevent competing providers from deriving, for example, better pricing strategies. Balancing the load without observing Fog load and resource information allows for dynamic adaptation to changes in Fog resources and the load they are handling. In addition, the state representation is carefully designed to enable the agent to adapt dynamically to changes in workload requirements and in the distribution of IoT devices. The proposed algorithm is interactively evaluated with Discrete-event Simulation (DES) to mimic practical deployment in realistic environments. Moreover, the algorithm's generalization ability is evaluated with longer simulations than those used for training, which, to the best of our knowledge, has not been explored before. The results presented in this paper show that the proposed approach reduces the total execution delay relative to baseline methods by at least 82% and 87% under medium and high workload generation rates, respectively.
We also compare our Privacy-Aware RL (PARL) agent with Privacy-Lacking RL (PLRL) agents from the literature. The results show that the PARL agent efficiently handles the added complexity of partial observability (required to maintain privacy) while matching the performance of fully observable PLRL agents.
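To illustrate the core idea described above, the following is a deliberately simplified, stateless sketch (not the paper's actual PARL algorithm): an RL broker routes IoT tasks to Fog nodes using only feedback it can observe itself, namely the delay of each task it dispatched, and never queries the nodes' private load or capacity information. The class name, parameters, and two-node scenario are illustrative assumptions.

```python
import random

class PrivacyAwareDispatcher:
    """Toy sketch (illustrative only, not the paper's algorithm):
    a broker that learns where to route tasks from its own observed
    delays, never from the Fog nodes' load or resource reports."""

    def __init__(self, n_nodes, alpha=0.1, gamma=0.9, eps=0.1):
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.q = [0.0] * n_nodes  # one learned value per fog node

    def choose(self):
        # epsilon-greedy: explore occasionally, otherwise pick the
        # node with the best learned (delay-based) value
        if random.random() < self.eps:
            return random.randrange(len(self.q))
        return self.q.index(max(self.q))

    def update(self, node, observed_delay):
        # reward is the negative delay measured at the broker itself;
        # no queue lengths or CPU statistics are requested from nodes
        target = -observed_delay + self.gamma * max(self.q)
        self.q[node] += self.alpha * (target - self.q[node])


random.seed(0)
broker = PrivacyAwareDispatcher(n_nodes=2)
hidden_delays = [5.0, 1.0]  # node-internal, invisible to the broker
for _ in range(2000):
    a = broker.choose()
    broker.update(a, hidden_delays[a])
# the broker ends up preferring the faster node without ever
# observing the nodes' internal state
```

The key point mirrored here is that the reward signal is computed entirely from quantities the dispatcher observes on its own side, which is what allows adaptation to changing node loads without breaching provider privacy.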
