Abstract

The number of Internet of Things (IoT)-based applications is constantly increasing, and transferring all of their data to a remote centralized cloud incurs high latency, energy consumption, bandwidth usage, and cost. The fog layer, as a new computing paradigm, supports resource-constrained IoT devices in such cases: fog equipment at the network edge can allocate its resources to process real-time IoT applications, and IoT application placement mechanisms in the fog environment have been developed to address these issues. Under a microservice architecture, the constituent services of an IoT application can be deployed independently on fog servers. Optimal utilization of fog resources is therefore essential for satisfying Quality of Service (QoS) requirements, and it calls for a distributed, autonomous mechanism to solve the Service Placement Problem (SPP) in fog. Motivated by the limited generalizability of existing approaches, we use the Asynchronous Advantage Actor-Critic (A3C) algorithm, a Deep Reinforcement Learning (DRL) approach, to solve the SPP. The proposed scheme places IoT services with the objectives of minimizing cost and latency under deadline and resource constraints; according to these objectives, A3C seeks to maximize the long-term cumulative reward to improve QoS. Placement is performed on local fog domains, and neighboring fog domains are used when needed to improve fog utilization. In addition, the distribution of resource usage over time is extracted so that more resources remain available for future requests. Simulation results show that our mechanism significantly improves cost and latency compared to counterparts such as DDQL and IMPALA.
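For illustration, the long-term objective that A3C maximizes can be written as a discounted cumulative reward whose per-step reward penalizes placement cost and latency. The following is a minimal sketch, not the paper's exact formulation; the weights w_c and w_l, the discount factor gamma, and the deadline-violation penalty are illustrative assumptions:

    % Illustrative reward sketch (weights, gamma, and penalty term are assumed, not from the paper)
    r_t = -\bigl( w_c \,\mathrm{cost}_t + w_l \,\mathrm{latency}_t \bigr)
          - \mathbb{1}[\mathrm{deadline\ violated}] \cdot P,
    \qquad
    R = \mathbb{E}\!\left[ \sum_{t=0}^{T} \gamma^{t} r_t \right].

Maximizing R with such a reward drives the placement policy toward low-cost, low-latency decisions while discouraging deadline violations, consistent with the objectives stated above.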
