Abstract

Fog computing has emerged as a new paradigm enabling the deployment of novel Internet-of-Things (IoT) applications. Fog infrastructure is composed of heterogeneous nodes characterized by complex distribution, mobility, and sporadic resource availability. Consequently, coordinating resources to continuously satisfy quality-of-service (QoS) requirements becomes challenging, and accurate resource tracking is needed for seamless servicing. In this context, we investigate and propose online resource allocation solutions whose main objective is to maximize the number of users satisfied within a predefined latency requirement. We model the fog computing environment as a Markov Decision Process (MDP) and then formulate the optimization problem. Owing to the problem’s NP-hardness, we leverage reinforcement learning (RL) to develop resource allocation schemes. First, we propose a centralized method in which a smart fog controller possesses global awareness of the fog computing environment. Next, we present a more practical, collaborative solution in which each RL-enabled agent manages a group of fog nodes and their resources to satisfy computing requests. Simulation results based on real-world mobility datasets illustrate the high efficiency of the proposed solutions, with a preference for the collaborative approach, and demonstrate their superiority over state-of-the-art methods.
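The abstract does not specify the paper's MDP (its states, actions, or reward function), so as a purely illustrative sketch, the following toy Python snippet shows the general shape of an RL-based allocation loop: a tabular Q-learning agent assigns incoming requests to fog nodes and is rewarded when a request is served within a latency deadline. All names, node counts, latencies, and hyperparameters here are hypothetical and not taken from the paper.

```python
import random

# Hypothetical toy setting: one agent manages a few fog nodes.
NUM_NODES = 3          # fog nodes managed by the agent (assumed)
CAPACITY = 5           # free compute slots per node (assumed)
LATENCY = [1, 2, 4]    # per-node service latency (assumed units)
DEADLINE = 3           # a request is "satisfied" if latency <= DEADLINE

def step(load, action):
    """Assign one request to fog node `action`; reward 1 iff the node
    has spare capacity and meets the latency requirement."""
    if load[action] < CAPACITY and LATENCY[action] <= DEADLINE:
        load[action] += 1
        return 1.0
    return 0.0

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # State is abstracted to a single state for brevity: Q over actions only.
    q = [0.0] * NUM_NODES
    for _ in range(episodes):
        load = [0] * NUM_NODES
        for _ in range(10):  # ten requests arrive per episode
            if rng.random() < eps:
                a = rng.randrange(NUM_NODES)          # explore
            else:
                a = max(range(NUM_NODES), key=lambda i: q[i])  # exploit
            r = step(load, a)
            # Standard Q-learning update (single-state simplification).
            q[a] += alpha * (r + gamma * max(q) - q[a])
    return q

q = train()
# After training, the greedy action should be a node that meets the deadline.
best = max(range(NUM_NODES), key=lambda i: q[i])
```

In this simplified sketch the agent learns to avoid the node whose latency exceeds the deadline; the paper's actual schemes additionally track node mobility and sporadic resource availability, and its collaborative variant distributes such agents across groups of fog nodes.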
