Abstract

The ubiquitous diffusion of cloud computing requires suitable management policies to handle the workload while guaranteeing quality constraints and mitigating costs. The typical trade-off is between the power consumed and the adherence to a service-level metric subscribed to by customers. To this end, a possible approach is to use an optimization-based placement mechanism to select the servers on which to deploy virtual machines. Unfortunately, high packing factors can lead to performance and security issues, e.g., virtual machines can compete for hardware resources or collude to leak data. Therefore, we introduce a multi-objective approach to compute optimal placement strategies considering different goals, such as the impact of hardware outages, the power required by the datacenter, and the performance perceived by users. Placement strategies are found by using a deep reinforcement learning framework to select the best placement heuristic for each virtual machine composing the workload. Results indicate that our method outperforms bin packing heuristics widely used in the literature on both synthetic and real workloads.
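As a rough illustration of the mechanism described in the abstract, the sketch below places each incoming VM with the heuristic chosen by a policy function. The capacities, demands, and the random stub standing in for the trained DRL agent are illustrative assumptions, not the paper's actual model:

```python
import random

# Hypothetical capacities and demands; the paper's workloads differ.
PM_CAPACITY = 100
pms = [PM_CAPACITY] * 4          # remaining free capacity of each PM
vm_demands = [30, 55, 20, 45, 10, 60]

def first_fit(free, demand):
    """Index of the first PM that can host the VM, or None."""
    for i, f in enumerate(free):
        if f >= demand:
            return i
    return None

def best_fit(free, demand):
    """Index of the feasible PM left with the least spare capacity."""
    feasible = [i for i, f in enumerate(free) if f >= demand]
    return min(feasible, key=lambda i: free[i] - demand) if feasible else None

def worst_fit(free, demand):
    """Index of the feasible PM left with the most spare capacity."""
    feasible = [i for i, f in enumerate(free) if f >= demand]
    return max(feasible, key=lambda i: free[i] - demand) if feasible else None

HEURISTICS = [first_fit, best_fit, worst_fit]

def policy(state):
    # Stand-in for the trained DRL agent, which would map the datacenter
    # state to a heuristic; here it simply picks one at random.
    return random.randrange(len(HEURISTICS))

for demand in vm_demands:
    action = policy((tuple(pms), demand))
    target = HEURISTICS[action](pms, demand)
    if target is not None:
        pms[target] -= demand
```

The agent's action space is the set of heuristics rather than the set of PMs, which keeps it small and independent of the datacenter size.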

Highlights

  • The cloud paradigm was originally introduced to access computing resources on demand, and nowadays it has become the foundation of different services

  • In this paper we propose a mechanism for virtual machine (VM) placement based on deep reinforcement learning (DRL) (Arulkumaran et al 2017)

  • We consider how a variation of the quality of experience (QoE) impacts the power required by the datacenter and vice versa by introducing the “density” of quality with respect to the required power, defined in terms of Pmax, the power required by a physical machine (PM) when N VMs are deployed



Introduction

The cloud paradigm was originally introduced to access computing resources on demand, and nowadays it has become the foundation of many services. To pursue the vision of efficient cloud datacenters, the preferred solution aims at finding the best mapping of VMs over servers or other computing resources, referred to as physical machines (PMs) in the following, according to some performance criteria. As detailed in Zhang et al (2018), live migration of VMs poses various technological and performance challenges, and it requires non-negligible network bandwidth and computing resources. For this reason, another approach called placement has been developed to prevent inefficient allocations when VMs are first created on PMs to fulfill requests from users (Usmani and Singh 2016). Machine learning can be used either to design new VM placement approaches or to enhance the capabilities of existing heuristics. Toward this end, in this paper we propose a mechanism for VM placement based on deep reinforcement learning (DRL) (Arulkumaran et al 2017).
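For context, a standard bin-packing baseline of the kind placement methods are usually compared against is first-fit decreasing: VMs are sorted by demand, and a new PM is opened only when no already-open PM can host the next VM. The capacity and demand values below are illustrative assumptions:

```python
def first_fit_decreasing(capacity, demands):
    """Return the number of PMs opened by first-fit decreasing."""
    pms = []  # remaining free capacity of each opened PM
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(pms):
            if free >= d:      # first already-open PM that fits
                pms[i] = free - d
                break
        else:                  # no open PM fits: open a new one
            pms.append(capacity - d)
    return len(pms)

# With total demand 220 and capacity 100, at least 3 PMs are needed,
# and FFD indeed uses exactly 3 here.
print(first_fit_decreasing(100, [30, 55, 20, 45, 10, 60]))  # → 3
```

Such heuristics are fast and simple, but they optimize a single packing objective, which motivates the multi-objective DRL formulation proposed in this paper.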

Related work
Definition of the VM placement problem
The sequence of optimization problems
Goals of the placement procedure
Multi-objective placement using deep reinforcement learning
Rainbow DQN
Numerical results
Training of the DRL-VMP approach
Performance metrics
Simulation results
Statistical analysis
Conclusions
Findings
Compliance with ethical standards
