Experimentation in real cloud environments for training Deep Reinforcement Learning (DRL) agents can be costly, time-consuming, and non-repeatable; simulation-based approaches are therefore a promising alternative. This paper introduces a specialized simulation environment that integrates OpenAI Gym, a popular platform for reinforcement learning, with CloudSim Plus, a versatile cloud simulation framework, focusing on the case study of energy-driven cloud scaling. By combining the strengths of the Python-based OpenAI Gym and the Java-based CloudSim Plus, the environment offers a flexible and extensible platform for training DRL agents; the integration is realized through a gateway that enables seamless interaction between the two frameworks. The environment supports the full DRL training loop in an energy-aware cloud-scaling context and provides configurable settings that represent various scaling scenarios, allowing researchers to explore different parameter configurations and to evaluate agent performance systematically. Extensive experiments demonstrate the environment's functionality and applicability in measuring DRL-agent performance on energy-driven cloud scaling, and the case-study results validate its effectiveness for training DRL agents in such scenarios. Overall, this work presents a novel simulation environment that bridges the gap between DRL-agent training and cloud-scaling challenges, offering researchers a valuable tool for advancing energy-driven cloud scaling through reinforcement learning.
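To illustrate the architecture described above, the following is a minimal sketch of how a Gym-style Python environment might delegate simulation steps to a Java-based CloudSim Plus backend through a gateway. Everything here is an assumption for illustration: the real gateway API is not specified in the abstract (Py4J is one common choice for Python-Java bridging), so `SimulationGatewayStub` below is a pure-Python stand-in with an invented, illustrative dynamics model, and `CloudScalingEnv` only mimics the `reset()`/`step()` contract of a Gym environment.

```python
class SimulationGatewayStub:
    """Stand-in for a gateway (e.g. Py4J) to a CloudSim Plus process.

    The observations, rewards, and dynamics are invented for this
    sketch; a real gateway would forward these calls to the Java
    simulator.
    """

    def __init__(self):
        self._clock = 0.0

    def reset(self):
        self._clock = 0.0
        return {"cpu_util": 0.5, "energy_wh": 0.0}

    def step(self, action):
        # action: -1 = scale in, 0 = no-op, +1 = scale out (illustrative)
        self._clock += 1.0
        obs = {
            "cpu_util": max(0.0, 0.5 - 0.1 * action),
            "energy_wh": 10.0 + 5.0 * action,
        }
        # Energy-driven reward: penalize energy consumption.
        reward = -obs["energy_wh"] / 100.0
        done = self._clock >= 10.0  # fixed-length episode for the sketch
        return obs, reward, done


class CloudScalingEnv:
    """Gym-like wrapper: reset()/step() forward to the simulator gateway."""

    def __init__(self, gateway):
        self.gateway = gateway

    def reset(self):
        return self.gateway.reset()

    def step(self, action):
        return self.gateway.step(action)


env = CloudScalingEnv(SimulationGatewayStub())
obs = env.reset()
obs, reward, done = env.step(1)  # agent requests scaling out
print(obs["cpu_util"], round(reward, 2), done)
```

In a real setup, the stub would be replaced by a gateway object connected to the running Java process, while the Python-side wrapper and the DRL training loop would remain unchanged.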