Abstract

Cloud service providers use load balancing algorithms to avoid Service Level Agreement Violations (SLAVs) and wasted energy consumption caused by host over- and under-utilization, respectively. Load balancing algorithms migrate Virtual Machines (VMs) between hosts in order to balance host loads. Any VM that is migrated experiences performance degradation, which lowers Quality of Service (QoS) and can lead to SLAVs. Hence, an optimal load balancing method should reduce the number of over- and under-utilized hosts using a minimal number of VM migrations. One metric previously used in the literature to evaluate load balancing was claimed to weigh SLAVs caused by over-utilized hosts and by migrations equally. In this paper, we show that, in fact, this metric favors keeping the number of migrations low at the expense of an increased number of over-utilized hosts. We demonstrate this disparity by simulating Google, PlanetLab, and Azure data sets in CloudSim. The metric may suit public cloud providers that focus on minimizing SLAVs and keeping energy costs low, but it does not consider the QoS of customer VMs. We therefore propose an alternative metric that captures VM QoS by accounting not only for performance loss during migration but also for performance degradation due to host over-utilization. Private cloud providers, e.g., IT services within large organizations, often value the performance of their “customer” VMs, i.e., the QoS their organization receives, alongside traditional cloud provider costs, i.e., energy and SLAV costs. Hence, our alternative metric is more appropriate in these scenarios. We compare and contrast load balancing methods using both the existing, biased metric and our new alternative metric.
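For concreteness, the sketch below illustrates the kind of composite metric this critique targets. The abstract does not name the metric, so the sketch assumes the formulation commonly used in CloudSim-based studies, SLAV = SLATAH × PDM (SLA-violation Time per Active Host multiplied by Performance Degradation due to Migrations); the function and variable names are illustrative only and are not taken from the paper.

# Minimal sketch (assumed formulation, not the paper's implementation) of the
# composite SLA-violation metric SLAV = SLATAH * PDM often used with CloudSim.

def slatah(overload_time_per_host, active_time_per_host):
    """Average fraction of active time each host spends fully (100%) utilized."""
    ratios = [o / a for o, a in zip(overload_time_per_host, active_time_per_host) if a > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

def pdm(degradation_per_vm, demand_per_vm):
    """Average fraction of each VM's CPU demand lost to migration overhead."""
    ratios = [d / c for d, c in zip(degradation_per_vm, demand_per_vm) if c > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

def slav(overload_time_per_host, active_time_per_host, degradation_per_vm, demand_per_vm):
    """Composite metric: the product of the over-utilization and migration terms."""
    return (slatah(overload_time_per_host, active_time_per_host)
            * pdm(degradation_per_vm, demand_per_vm))

Because the metric is a product, driving either factor toward zero, e.g., suppressing migrations so that PDM stays small, shrinks the composite value even if the over-utilization term (SLATAH) grows, which is the imbalance described above.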
