Abstract

The rapid growth of cloud computing has significantly increased energy usage, driven mainly by CPU operations and cooling. While cloud scaling efficiently allocates resources for changing workloads, current energy-aware methods typically optimize energy metrics alongside throughput, execution time, or SLA compliance, neglecting the influence of cooling power on overall energy consumption. To bridge this gap, we propose a deep reinforcement learning (DRL)-based autoscaler that treats cooling power as a critical factor in decision-making. Our approach employs DRL to dynamically adjust cloud resources, aiming to maximize energy efficiency while meeting performance objectives. Unlike tabular RL, DRL uses neural networks to handle the extensive state–action space of cloud scaling, overcoming the limited memory available for storing explicit Q-values. In this study, we evaluate the proposed solution through a simulation-based experiment, comparing two DRL-based autoscalers, built on the Deep Q-Network (DQN) and Double Deep Q-Network (DDQN) algorithms, against an RL-based autoscaler. Our findings indicate that the DDQN-based autoscaler consistently outperforms the other algorithms, maintaining optimal Power Usage Effectiveness (PUE) levels and improving task execution speed under high workloads. In contrast, the DQN-based autoscaler excels at sustaining optimal PUE levels under lower task loads, converging faster at a scaling factor of 2 than at a scaling factor of 1.
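
The abstract contrasts tabular Q-learning with DQN/DDQN function approximation. As a minimal sketch of that idea (our illustration, not the paper's implementation; the state features, network size, and hyperparameters are hypothetical), the Python snippet below shows a tiny Q-network over a continuous autoscaling state and the DQN vs. DDQN bootstrap targets:

```python
# Minimal sketch, not the authors' method: a one-hidden-layer Q-network maps
# a continuous cloud state (hypothetical features: CPU utilization,
# normalized PUE, normalized VM count) to Q-values for three scaling actions,
# so the state space never has to be enumerated as a Q-table.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, HIDDEN = 3, 3, 16  # actions: scale down / hold / scale up

# Online network parameters: Q(s) = W2 @ relu(W1 @ s + b1) + b2
W1 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN)); b2 = np.zeros(N_ACTIONS)

# Target network: a periodically synchronized copy used for bootstrapping.
W1_t, b1_t, W2_t, b2_t = W1.copy(), b1.copy(), W2.copy(), b2.copy()

def q_values(s):
    h = np.maximum(0.0, W1 @ s + b1)  # ReLU hidden layer
    return W2 @ h + b2                # one Q-value per scaling action

def q_target(s):
    h = np.maximum(0.0, W1_t @ s + b1_t)
    return W2_t @ h + b2_t

def act(s, epsilon=0.1):
    # Epsilon-greedy: explore occasionally, otherwise take the best action.
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(s)))

def td_target(r, s_next, gamma=0.99, double=True):
    # DQN:  r + gamma * max_a Q_target(s', a)
    # DDQN: r + gamma * Q_target(s', argmax_a Q_online(s', a))
    # DDQN selects the action with the online net but evaluates it with the
    # target net, which reduces the overestimation bias of plain DQN.
    if double:
        a_star = int(np.argmax(q_values(s_next)))    # select with online net
        return r + gamma * q_target(s_next)[a_star]  # evaluate with target net
    return r + gamma * np.max(q_target(s_next))

# Example state: [CPU utilization, normalized PUE, normalized VM count]
state = np.array([0.85, 0.6, 0.4])
print("Q-values:", q_values(state), "-> action", act(state))
```

In a full agent, the online weights would be trained on replayed transitions toward these targets and copied into the target network at intervals; this sketch omits the training loop for brevity.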
