The deployment of microservices in Hybrid Fog/Cloud (HFC) environments for Internet of Things (IoT) applications presents a significant challenge: efficiently scheduling containerized services across distributed resources. While existing studies have explored microservice scheduling, a comprehensive approach that jointly considers resource constraints, workflow dependencies, and dynamic hybrid environments remains elusive. This paper introduces a novel Deep Reinforcement Learning-based Algorithm (DRLA) for containerized microservice scheduling in HFC environments. DRLA relies on a multi-constrained Binary Quadratic Program (BQP) model to optimize execution time, resource consumption, and occupancy rates while accounting for microservice dependencies and resource capabilities. The algorithm leverages two Deep Reinforcement Learning (DRL) agents, DQN and REINFORCE, to learn and adapt to the dynamic nature of the HFC federation. Experimental evaluations on five real-world Business Process (BP) use cases demonstrate that DRLA outperforms existing scheduling approaches, including the default Kubernetes scheduler, Reward Sharing Deep Q-Learning (RSDQL), and DRL-based schedulers. Compared to these schedulers, DRLA delivers optimal or near-optimal solutions, with significant improvements in key performance metrics: it achieves average optimality gaps of just 0.16% and 0.21%, significantly outperforming the Kubernetes scheduler, which exhibits a gap of 2.88%, while other comparable algorithms exhibit far larger optimality gaps. These results highlight DRLA’s ability to schedule containerized microservices in hybrid fog/cloud environments, yielding near-optimal solutions for a variety of use cases.