MiyakoDori: A Memory Reusing Mechanism for Dynamic VM Consolidation
In Infrastructure-as-a-Service datacenters, the placement of Virtual Machines (VMs) on physical hosts is dynamically optimized in response to the resource utilization of the hosts. However, existing live migration techniques, used to move VMs between hosts, involve large data transfers and prevent dynamic consolidation systems from optimizing VM placements efficiently. In this paper, we propose a technique called "memory reusing" that reduces the amount of memory transferred during live migration. When a VM migrates to another host, the memory image of the VM is kept on the source host. When the VM later migrates back to the original host, the kept memory image is "reused", i.e., memory pages that are identical to the kept pages are not transferred. We implemented a system named MiyakoDori that uses memory reusing in live migrations. Evaluations show that MiyakoDori significantly reduced the amount of memory transferred during live migrations and eliminated 87% of unnecessary energy consumption when integrated with our dynamic VM consolidation system.
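The core idea, retransferring only pages that changed since the VM left, can be sketched as follows. This is an illustrative model only; comparing pages via content hashes is an assumption for the sketch, not necessarily how MiyakoDori tracks page identity.

```python
import hashlib

PAGE_SIZE = 4096  # typical x86 page size

def page_hash(page: bytes) -> str:
    # Content fingerprint of one guest memory page
    return hashlib.sha1(page).hexdigest()

def pages_to_transfer(current_pages, kept_hashes):
    """Indices of pages that differ from the image kept on the source;
    only these need to be sent when the VM migrates back."""
    return [i for i, page in enumerate(current_pages)
            if kept_hashes.get(i) != page_hash(page)]
```

For example, if the source kept hashes for pages 0 and 1 and only page 1 has since changed, just that one page is retransferred.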
- Conference Article
27
- 10.1109/ipdps.2016.120
- May 1, 2016
A key attraction of virtual machines (VMs) is live migration - the ability to move their execution state across physical machines even as the VMs continue to run. Unfortunately, the traditional pre-copy and post-copy techniques are not agile in the face of resource pressure at the source host, since it takes a long time to transfer the memory state of a VM. Consequently, the performance suffers for all VMs - those being migrated as well as those being left behind. Prior works have attempted to optimize indirect measures of migration effectiveness such as downtime, total migration time, and network overhead. However, none have treated the performance of VMs impacted by migration as the primary metric of migration effectiveness. We propose an Agile live migration technique that quickly recovers the performance of all VMs under resource pressure by eliminating resource pressure faster than traditional live migration. The working set of a VM is typically much smaller than its full memory footprint. Our approach works by transparently tracking the working set of each VM and offloading the non-working set (cold pages) in advance to portable per-VM swap devices. We present a new hybrid pre/post-copy technique that reduces the performance impact on the VM's workload by transferring only the working set of the VM while enabling the destination to remotely access cold pages from the per-VM swap device. We describe the challenges in the design and implementation of Agile live migration in the KVM/QEMU platform without modifying the guest OS in the VM. When live migrating under memory pressure, we demonstrate a reduction in the performance impact on VMs by up to a factor of 2 and in migration time by up to a factor of 4, besides reduced memory pressure on both the source and destination hosts.
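The hot/cold partition this abstract describes can be illustrated with a minimal sketch: pages touched within a recent access window form the working set (transferred to the destination), and everything else is treated as cold (offloaded to the swap device). The window-based definition and function names are assumptions for illustration, not the paper's tracking mechanism.

```python
def split_working_set(access_log, all_pages, window):
    """Partition pages into a working set (touched in the last
    `window` accesses) and cold pages (candidates for the per-VM
    swap device)."""
    hot = set(access_log[-window:])
    cold = [p for p in all_pages if p not in hot]
    return sorted(hot), cold
```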
- Dissertation
- 10.4225/03/58b911cef2af4
- Mar 3, 2017
Cloud Computing has recently emerged as a highly successful alternative information technology paradigm through on-demand resource provisioning and almost perfect reliability. In order to meet customer demands, Cloud providers are deploying large-scale virtualized data centers consisting of thousands of servers across the world. These data centers require a huge amount of electrical energy, incurring very high operating costs and leaving large carbon footprints. The reason behind the extremely high energy consumption is not just the amount of computing resources used, but also the inefficient use of those resources. Furthermore, with the recent proliferation of communication-intensive applications, network resource demands are becoming one of the key performance bottlenecks. As a consequence, efficient utilization of data center resources and minimization of energy consumption are emerging as critical factors for the success of Cloud Computing. This thesis addresses the above-mentioned resource and energy issues through data center-level resource management, in particular, through efficient Virtual Machine (VM) placement and consolidation strategies. The problem of high resource wastage and energy consumption is dealt with by an online consolidated VM cluster placement scheme, utilizing the Ant Colony Optimization (ACO) metaheuristic and a vector algebra-based multi-dimensional resource utilization model. In addition, optimization of network resource utilization is addressed by an online network-aware VM cluster placement strategy that localizes data traffic among communicating VMs and reduces traffic load in data center interconnects, which in turn reduces communication overhead in the upper-layer network switches.
Besides the online placement schemes that optimize the VM placement during the initial VM deployment phase, an offline decentralized dynamic VM consolidation framework and an associated algorithm leveraging the VM live migration technique are presented to further optimize run-time resource usage and energy consumption, along with migration overhead minimization. This migration-aware dynamic VM consolidation strategy uses realistic VM migration parameters to estimate the impact of necessary VM migrations on the data center and hosted applications. Simulation-based performance evaluation using representative workloads demonstrates that the proposed VM placement and consolidation strategies outperform state-of-the-art techniques, in the context of large data centers, by reducing energy consumption by up to 29%, server resource wastage by up to 85%, and network load by up to 60%.
- Research Article
28
- 10.1007/s11227-020-03248-4
- Mar 16, 2020
- The Journal of Supercomputing
Improving energy efficiency while guaranteeing quality of service (QoS) is one of the main challenges of efficient resource management in large-scale data centers. Dynamic virtual machine (VM) consolidation is a promising approach that aims to reduce energy consumption by reallocating VMs to hosts dynamically. Previous works have mostly considered only the current utilization of resources in the dynamic VM consolidation procedure, which imposes unnecessary migrations and host power mode transitions. Moreover, they select the destinations of VM migrations with conservative approaches to keep the service-level agreements, which is not in line with packing VMs onto fewer physical hosts. In this paper, we propose a regression-based approach that predicts the resource utilization of the VMs and hosts based on their historical data and uses the predictions in the different subproblems of the whole process. Predicting future utilization provides the opportunity of selecting a host with higher utilization as the destination of a VM migration, which leads to better VM placement from the viewpoint of VM consolidation. Results show that our proposed approach reduces the energy consumption of the modeled data center by up to 38% compared to other works in the area, guaranteeing the same QoS. Moreover, the results show better scalability than all other approaches. Our proposed approach improves energy efficiency even for the largest simulated benchmarks and takes less than 5% time overhead to execute for a data center with 7600 physical hosts.
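A regression-based utilization predictor of the kind this abstract describes can be sketched as a least-squares linear fit over a host's recent utilization history, extrapolated one step ahead. This is a minimal illustration; the paper's actual regression model and features may differ.

```python
def predict_next(history):
    """Fit a least-squares line to the utilization history and
    extrapolate one step into the future."""
    n = len(history)
    mx = (n - 1) / 2                      # mean of time indices 0..n-1
    my = sum(history) / n                 # mean utilization
    num = sum((x - mx) * (y - my) for x, y in zip(range(n), history))
    den = sum((x - mx) ** 2 for x in range(n))
    slope = num / den if den else 0.0
    return my + slope * (n - mx)          # predicted value at t = n
```

A host whose history trends upward linearly, e.g. 10%, 20%, 30%, 40%, is predicted at 50% for the next interval.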
- Research Article
59
- 10.1016/j.suscom.2018.02.001
- Feb 9, 2018
- Sustainable Computing: Informatics and Systems
An energy efficient and SLA compliant approach for resource allocation and consolidation in cloud computing environments
- Conference Article
2
- 10.1109/pdcat.2017.00038
- Dec 1, 2017
Virtual machine (VM) live migration can redistribute VMs among the hosts of a distributed system without degrading normal working performance. Post-copy is one of the widely used VM live migration algorithms and has many advantages, such as shorter total migration time, shorter downtime, and lower network overhead. Its disadvantage is that the VM is suspended frequently due to page faults caused by the incomplete memory while the VM resumes execution on the destination host, which may have an extremely negative effect on VM work efficiency. To solve this problem, this paper proposes the pre-record algorithm. Pre-record extends the VM's execution on the source host, records the memory pages accessed during this period to obtain the pre-recorded page set (PPS), and preferentially completes the migration of the PPS to avoid page faults as much as possible. It also proposes the PDoPMP algorithm, which analyzes the trend of the trajectory of memory addresses in the PPS in order to further expand the prediction range of memory pages. The experimental results show that pre-record is more efficient than traditional post-copy, especially when combined with PDoPMP. It markedly reduces the number of page faults and hence the total VM migration time without prolonging the downtime, and improves VM migration efficiency under different workload and network conditions.
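The PPS construction and PPS-first transfer order can be sketched as follows; recording unique pages in first-access order is an assumption of this sketch, and PDoPMP's trajectory analysis is not modeled.

```python
def build_pps(access_trace):
    """Pre-recorded page set: pages accessed during the extended
    source-side run, kept unique and in first-access order."""
    seen, pps = set(), []
    for page in access_trace:
        if page not in seen:
            seen.add(page)
            pps.append(page)
    return pps

def transfer_order(all_pages, pps):
    """Migrate PPS pages first so post-copy faults are unlikely to
    hit them; remaining pages follow (or are pulled on demand)."""
    pps_set = set(pps)
    return pps + [p for p in all_pages if p not in pps_set]
```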
- Conference Article
25
- 10.1109/cloud.2014.58
- Jun 1, 2014
Traditional metrics for live migration of virtual machines (VM) include total migration time, downtime, network overhead, and application degradation. In this paper, we introduce a new metric, "eviction time", defined as the time to evict the entire state of a VM from the source host. Eviction time determines how quickly the source host can be taken offline, or the freed resources re-purposed for other VMs. In traditional approaches for live VM migration, such as pre-copy and post-copy, eviction time is equal to the total migration time, because the source and destination hosts are coupled for the duration of the migration. Eviction time increases if the destination host is slow to receive the incoming VM, such as due to insufficient memory or network bandwidth, thus tying up the source host. We present a new approach, called "Scatter-Gather" live migration, which reduces the eviction time when the destination host is resource constrained. The key idea is to decouple the source and the destination hosts. The source scatters the VM's memory state quickly to multiple intermediaries (hosts or middleboxes) in the cluster. Concurrently, the destination gathers the VM's memory from the intermediaries using a variant of post-copy VM migration. We have implemented a prototype of Scatter-Gather in the KVM/QEMU platform. In our evaluations, Scatter-Gather reduces the VM eviction time by up to a factor of 6 while maintaining comparable total migration time against traditional pre-copy and post-copy for a resource constrained destination.
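The decoupling at the heart of Scatter-Gather can be illustrated with a toy model: the source round-robins pages across intermediaries, and the destination later reassembles them. The round-robin sharding policy is an assumption for the sketch; the paper's prototype may distribute state differently.

```python
def scatter(pages, intermediaries):
    """Source quickly spreads the VM's pages across intermediary
    hosts, freeing the source without waiting for the destination."""
    shards = {h: [] for h in intermediaries}
    for i, page in enumerate(pages):
        shards[intermediaries[i % len(intermediaries)]].append((i, page))
    return shards

def gather(shards):
    """Destination pulls pages from all intermediaries and restores
    their original order."""
    merged = sorted(p for shard in shards.values() for p in shard)
    return [page for _, page in merged]
```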
- Research Article
68
- 10.1109/access.2017.2785280
- Jan 1, 2018
- IEEE Access
The design of good host overload/underload detection and virtual machine (VM) placement algorithms plays a vital role in assuring the smoothness of VM live migration. The dynamic environment, which leads to a changing load on the VMs, motivates us to propose a Markov prediction model to forecast the future load state of a host. We propose a host load detection algorithm that identifies hosts that will become overutilized or underutilized, to avoid immediate VM migration. Moreover, we propose a VM placement algorithm that determines the set of candidate hosts to receive the migrated VMs in a way that reduces their VM migrations in the near future. We evaluate our proposed algorithms through CloudSim simulation on different types of PlanetLab real and random workloads. The experimental results show that our proposed algorithms achieve a significant reduction in service-level agreement violations, the number of VM migrations, and other metrics compared with competing algorithms.
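A first-order Markov load predictor of the kind this abstract proposes can be sketched by estimating state-transition probabilities from a host's observed load-state sequence and predicting the most likely next state. The three-state discretization below is an assumption for illustration.

```python
STATES = ['under', 'normal', 'over']  # assumed load-state discretization

def fit_transitions(history):
    """Estimate transition probabilities from an observed sequence
    of host load states."""
    counts = {s: {t: 0 for t in STATES} for s in STATES}
    for a, b in zip(history, history[1:]):
        counts[a][b] += 1
    probs = {}
    for s, row in counts.items():
        total = sum(row.values())
        probs[s] = ({t: c / total for t, c in row.items()} if total
                    else {t: 0.0 for t in STATES})
    return probs

def predict(probs, current_state):
    """Most likely next load state of the host."""
    row = probs[current_state]
    return max(row, key=row.get)
```

A host that always moved from "normal" to "over" in the history is thus predicted to become overloaded, and its VMs can be migrated proactively rather than reactively.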
- Research Article
1611
- 10.1002/cpe.1867
- Oct 7, 2011
- Concurrency and Computation: Practice and Experience
The rapid growth in demand for computational power driven by modern service applications, combined with the shift to the Cloud computing model, has led to the establishment of large‐scale virtualized data centers. Such data centers consume enormous amounts of electrical energy, resulting in high operating costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration and switching idle nodes to sleep mode allows Cloud providers to optimize resource usage and reduce energy consumption. However, the obligation of providing high quality of service to customers makes it necessary to deal with the energy‐performance trade‐off, as aggressive consolidation may lead to performance degradation. Because of the variability of workloads experienced by modern applications, the VM placement should be optimized continuously in an online manner. To understand the implications of the online nature of the problem, we conduct a competitive analysis and prove competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems. Furthermore, we propose novel adaptive heuristics for dynamic consolidation of VMs based on an analysis of historical data from the resource usage by VMs. The proposed algorithms significantly reduce energy consumption, while ensuring a high level of adherence to the service level agreement. We validate the high efficiency of the proposed algorithms by extensive simulations using real‐world workload traces from more than a thousand PlanetLab VMs.
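One adaptive overload-detection heuristic from this line of work sets the upper utilization threshold from the median absolute deviation (MAD) of the host's CPU history: stable workloads permit higher utilization, volatile ones trigger consolidation earlier. The safety parameter `s` and function names below are illustrative assumptions, not the paper's exact formulation.

```python
import statistics

def mad_overload_threshold(cpu_history, s=2.5):
    """Adaptive upper utilization threshold: 1 - s * MAD of the
    host's CPU utilization history (utilization in [0, 1])."""
    median = statistics.median(cpu_history)
    mad = statistics.median(abs(u - median) for u in cpu_history)
    return 1.0 - s * mad

def is_overloaded(cpu_history, current_utilization):
    # Trigger migration when current load crosses the adaptive bound
    return current_utilization > mad_overload_threshold(cpu_history)
```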
- Research Article
19
- 10.1016/j.suscom.2018.05.012
- May 26, 2018
- Sustainable Computing: Informatics and Systems
Type-aware virtual machine management for energy efficient cloud data centers
- Conference Article
2
- 10.1109/ic2e.2014.61
- Mar 1, 2014
Live migration of virtual machines (VMs) can benefit data centers through load balancing, fault tolerance, energy saving, etc. Although live migration between geographically distributed data centers can enable optimized scheduling of resources over a large area, it remains expensive and difficult to implement. One of the main challenges is transferring the memory state over the WAN. There is a conflict between the low data transmission speed over the WAN and the rapid change of memory contents. This paper proposes a novel live migration method with page-count-based data deduplication, which takes advantage of the fact that VMs running the same or similar operating systems and other software tend to have identical memory pages. Template pages are selected based on the number of occurrences of each page across multiple VMs and indexed by content hash. When a memory page is transferred, the source host first compares it with the templates. If a match is identified, the source host transfers the index instead of the data of the memory page. The experimental results show that our approach reduces the migration time by 27% and the data transferred by 38% on average compared to the default method of QEMU-KVM.
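The template-selection and index-substitution steps can be sketched as follows. SHA-1 as the content hash and the `top_k` cutoff are assumptions of this sketch, not necessarily the paper's choices.

```python
import hashlib
from collections import Counter

def select_templates(all_vm_pages, top_k):
    """Pages occurring most often across VMs become templates,
    indexed by their content hash."""
    counts = Counter(all_vm_pages)
    return {hashlib.sha1(p).hexdigest(): p
            for p, _ in counts.most_common(top_k)}

def encode_page(page, templates):
    """Send a short hash index when the page matches a template;
    otherwise fall back to sending the raw page data."""
    h = hashlib.sha1(page).hexdigest()
    return ('index', h) if h in templates else ('data', page)
```

A page full of zeros, common across many guests, is typically a template: the source sends a 20-byte hash instead of a 4 KB page.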
- Conference Article
1
- 10.1109/ispdc2018.2018.00015
- Jun 1, 2018
Virtual machine (VM) placement is the process that allocates virtual machines onto physical machines (PMs) in cloud data centers. Reservation-based VM placement allocates VMs to PMs according to a (statically) reserved VM size, regardless of the actual workload. If, at some point in time, a VM is making use of only a fraction of its reservation, this leads to PM underutilization, which wastes energy and, at grand scale, may result in financial and environmental costs. In contrast, demand-based VM placement consolidates VMs based on the actual workload's demand. This may lead to better utilization, but it may incur a higher number of Service Level Agreement Violations (SLAVs) resulting from overloaded PMs and/or VM migrations from one PM to another caused by workload fluctuations. To control the tradeoff between utilization and the number of SLAVs, parameter-based VM placement can allow a provider, through a single parameter, to explore the whole space of VM placement options that range from demand-based to reservation-based. The idea investigated by this paper is to adjust this parameter continuously at run-time so that a provider can maintain the number of SLAVs below a certain (predetermined) threshold while using the smallest possible number of PMs for VM placement. Two dynamic algorithms to select a value of this parameter on-the-fly are proposed. Experiments conducted using CloudSim evaluate the performance of the two algorithms using one synthetic and one real workload.
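The single tuning parameter can be modeled as a linear interpolation between a VM's demand and its reservation; the effective size used for packing then slides between the two extremes. The interpolation form and the name `alpha` are assumptions for this sketch.

```python
def effective_size(demand, reservation, alpha):
    """Size used for packing a VM onto PMs.
    alpha = 0 -> pure demand-based placement;
    alpha = 1 -> pure reservation-based placement."""
    return demand + alpha * (reservation - demand)
```

A run-time controller could raise `alpha` when SLAVs approach the threshold (safer, more PMs) and lower it when there is slack (tighter packing, fewer PMs).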
- Research Article
1
- 10.1016/j.measen.2024.101169
- Apr 26, 2024
- Measurement: Sensors
Deterministic lightweight VM placement for handling resource constraint issues in the cloud
- Research Article
16
- 10.1016/j.future.2017.09.024
- Oct 12, 2017
- Future Generation Computer Systems
Stochastic scheduling for variation-aware virtual machine placement in a cloud computing CPS
- Research Article
30
- 10.1007/s12083-016-0502-z
- Sep 30, 2016
- Peer-to-Peer Networking and Applications
The problem of Virtual Machine (VM) placement is critical to the security and efficiency of the cloud infrastructure. Most research to date focuses on the influence of deployed VMs on data center load, energy consumption, resource loss, etc. Few works consider the security and privacy of tenant data on the VMs. For instance, through the application of virtualization technology, VMs from different tenants may be placed on one physical host. Hence, attackers may steal secrets from other tenants via side-channel attacks based on the shared physical resources, which threatens the data security of tenants in cloud computing. To address these issues, this paper proposes an efficient and secure VM placement strategy. First, we define the relevant security and efficiency indices of the cloud computing system. Then, we establish a multi-objective constraint optimization model for VM placement that considers the security and performance of the system, and solve this model with the discrete firefly algorithm. The experimental results on the OpenStack cloud platform indicate that this strategy can effectively reduce the possibility of malicious tenants and targeted tenants residing on the same physical node, and reduces energy consumption and resource loss in the data center.
- Research Article
63
- 10.1109/jsyst.2020.3002721
- Jun 30, 2020
- IEEE Systems Journal
Cloud computing efficiency greatly depends on the efficiency of the virtual machine (VM) placement strategy used. However, VM placement has remained one of the major challenging issues in cloud computing, mainly because of the heterogeneity of both virtual and physical machines (PMs), the multidimensionality of the resources, and the increasing scale of cloud data centers (CDCs). An inefficient VM placement strategy has a significant influence on the quality of service provided, the amount of energy consumed, and the running costs of the CDCs. To address these issues, in this article, we propose a greedy randomized VM placement (GRVMP) algorithm for a large-scale CDC with heterogeneous and multidimensional resources. GRVMP is inspired by the "power of two choices" model and places VMs on more power-efficient PMs to jointly optimize CDC energy usage and resource utilization. The performance of GRVMP is evaluated using synthetic and real-world production scenarios (Amazon EC2) with several performance metrics. The experimental results confirm that GRVMP jointly optimizes power usage and overall resource wastage. The results also show that GRVMP significantly outperforms the baseline schemes in terms of the performance metrics used.
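The "power of two choices" placement step can be sketched as sampling two feasible PMs at random and placing the VM on the more power-efficient one. The single-dimensional capacity model and the efficiency scoring below are illustrative assumptions; GRVMP itself works with multidimensional resources.

```python
import random

def place_vm(vm_demand, free_capacity, efficiency, rng=random):
    """Power-of-two-choices placement: sample two PMs with enough
    free capacity, pick the more power-efficient one, and commit
    the VM's demand to it. Returns the chosen PM or None."""
    feasible = [p for p, cap in free_capacity.items() if cap >= vm_demand]
    if not feasible:
        return None
    picks = rng.sample(feasible, min(2, len(feasible)))
    best = max(picks, key=lambda p: efficiency[p])
    free_capacity[best] -= vm_demand
    return best
```

Sampling only two candidates keeps each placement decision O(1) in the number of PMs while still strongly biasing load toward efficient machines, which is what makes the approach attractive at large CDC scale.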