A novel memory allocation scheme for memory energy reduction in virtualization environment
- Research Article
5
- 10.4018/ijcac.2019010102
- Jan 1, 2019
- International Journal of Cloud Applications and Computing
Virtualization has become a universal abstraction layer in contemporary data centers. By multiplexing hardware resources into multiple virtual machines and allowing several operating systems to run on the same physical platform at the same time, it can effectively decrease power consumption and facility footprint, and improve security by isolating virtual machines. In a virtualized system, memory resource management plays a decisive role in achieving high resource utilization and performance. Allocating insufficient memory to a virtual machine degrades its performance drastically; conversely, over-allocation wastes memory resources. Meanwhile, a virtual machine's memory demand may vary drastically over time. Consequently, effective memory resource management calls for a dynamic memory balancer which, ideally, can adjust memory allocation in a timely manner for each virtual machine based on its current memory demand, and thereby achieve the best memory utilization and the best possible overall performance. Migrating operating system instances across distinct physical hosts is a useful tool for administrators of data centers and clusters: it permits a clean separation between hardware and software and eases fault management. To estimate the memory demand of each virtual machine and to arbitrate potential memory resource contention, a widely adopted approach is to build a Least Recently Used (LRU)-based miss ratio curve (MRC), which provides not only the current working set size but also the correlation between performance and the target memory allocation size. In this paper, the authors first present a low-overhead LRU-based memory demand tracking scheme with orthogonal optimizations, including AVL-tree-based LRU organization and dynamic hot set sizing.
The evaluation results confirm that, for the complete SPEC CPU 2006 benchmark suite, after applying the optimization techniques the mean overhead of MRC construction is lowered from 173% to only 2%. Based on the current working set size (WSS), the authors then predict its trend in the near future and take different actions depending on the forecast. When there is an adequate amount of physical memory on the host, the system balances memory among its VMs locally. When local memory is insufficient and the memory pressure is predicted to persist for a sufficiently long time, VM live migration is used to move one or more VMs from the overloaded host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. The experimental results show that this design achieves a 49% center-wide speedup.
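The LRU-based miss ratio curve at the heart of this scheme can be illustrated with a toy implementation of Mattson's stack algorithm (a sketch with a made-up trace; the paper's optimized AVL-tree organization avoids the linear scans used here):

```python
# Sketch of LRU-based miss ratio curve (MRC) construction via Mattson's
# stack algorithm. The trace and page names are illustrative only.

def lru_stack_distances(trace):
    """Return the LRU stack distance of each access; None on first touch."""
    stack = []  # most-recently-used page at the end
    distances = []
    for page in trace:
        if page in stack:
            # Distance = number of distinct pages accessed since last touch.
            depth = len(stack) - 1 - stack.index(page)
            distances.append(depth)
            stack.remove(page)
        else:
            distances.append(None)  # cold miss
        stack.append(page)
    return distances

def miss_ratio_curve(trace, max_pages):
    """Miss ratio under an LRU-managed memory of every size 1..max_pages."""
    dists = lru_stack_distances(trace)
    curve = {}
    for size in range(1, max_pages + 1):
        misses = sum(1 for d in dists if d is None or d >= size)
        curve[size] = misses / len(trace)
    return curve

trace = ["a", "b", "c", "a", "b", "c", "d", "a"]
mrc = miss_ratio_curve(trace, 4)
```

One pass over the trace yields the miss ratio for every candidate allocation size at once, which is why an MRC, unlike a single working-set sample, relates performance to any target memory allocation.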
- Research Article
1
- 10.1007/s11434-010-9985-9
- Sep 1, 2010
- Chinese Science Bulletin
Dynamic memory mapping delivers additional flexibility to virtual resource management
- Conference Article
5
- 10.1109/icpads.2011.67
- Dec 1, 2011
Increasing Internet business and growing computing footprints motivate server consolidation in data centers. Through virtualization technology, server consolidation can reduce the number of physical hosts and provide scalable services. However, ineffective memory usage among multiple virtual machines (VMs) becomes the bottleneck in server consolidation environments. Because of inaccurate memory usage estimates and the lack of memory resource management, services in data centers suffer significant performance degradation even though they occupy a large amount of memory. To improve this situation, we first introduce a VM memory division view and a VM free-memory division view. Based on them, we propose a hierarchical memory service mechanism. We have designed and implemented the corresponding memory scheduling algorithm to enhance memory efficiency and meet service-level agreements. Benchmark results show that our implementation can save 30% of physical memory with only 1% to 5% performance degradation. Built on the Xen virtualization platform and balloon driver technology, our work brings substantial benefits to a commercial cloud computing center providing services on more than 2,000 VMs.
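A balloon-style rebalancing step of the kind such a memory scheduler performs can be sketched as follows (the reserve threshold, VM names, and proportional-grant rule are illustrative assumptions, not the paper's algorithm):

```python
# Illustrative sketch of balloon-style memory rebalancing across VMs:
# reclaim surplus free memory (above a per-VM reserve) and grant it to
# VMs whose demand exceeds their current allocation.

RESERVE_MB = 256  # free memory each VM keeps as a safety buffer (assumed)

def rebalance(vms):
    """vms: {name: {'alloc': MB, 'free': MB, 'demand': MB}} -> new targets."""
    surplus = {n: max(0, v['free'] - RESERVE_MB) for n, v in vms.items()}
    deficit = {n: max(0, v['demand'] - v['alloc']) for n, v in vms.items()}
    pool = sum(surplus.values())
    need = sum(deficit.values())
    targets = {}
    for n, v in vms.items():
        give = surplus[n]  # balloon inflates in this VM, reclaiming surplus
        take = 0
        if need > 0:
            # Grant reclaimed memory proportionally to each VM's deficit.
            take = min(deficit[n], pool * deficit[n] // need)
        targets[n] = v['alloc'] - give + take
    return targets

vms = {
    'vm1': {'alloc': 2048, 'free': 1024, 'demand': 1024},  # has surplus
    'vm2': {'alloc': 1024, 'free': 64,   'demand': 1536},  # under pressure
}
targets = rebalance(vms)
```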
- Research Article
120
- 10.1109/tnet.2014.2343945
- Oct 1, 2015
- IEEE/ACM Transactions on Networking
Virtualization technology and the ease with which virtual machines (VMs) can be migrated within the LAN have changed the scope of resource management from allocating resources on a single server to manipulating pools of resources within a data center. We expect WAN migration of virtual machines to likewise transform the scope of provisioning resources from a single data center to multiple data centers spread across the country or around the world. In this paper, we present the CloudNet architecture consisting of cloud computing platforms linked with a virtual private network (VPN)-based network infrastructure to provide seamless and secure connectivity between enterprise and cloud data center sites. To realize our vision of efficiently pooling geographically distributed data center resources, CloudNet provides optimized support for live WAN migration of virtual machines. Specifically, we present a set of optimizations that minimize the cost of transferring storage and virtual machine memory during migrations over low bandwidth and high-latency Internet links. We evaluate our system on an operational cloud platform distributed across the continental US. During simultaneous migrations of four VMs between data centers in Texas and Illinois, CloudNet's optimizations reduce memory migration time by 65% and lower bandwidth consumption for the storage and memory transfer by 19 GB, a 50% reduction.
- Research Article
67
- 10.1145/2007477.1952699
- Mar 9, 2011
- ACM SIGPLAN Notices
Virtual machine technology, and the ease with which VMs can be migrated within the LAN, has changed the scope of resource management from allocating resources on a single server to manipulating pools of resources within a data center. We expect WAN migration of virtual machines to likewise transform the scope of provisioning compute resources from a single data center to multiple data centers spread across the country or around the world. In this paper we present the CloudNet architecture, a cloud framework consisting of cloud computing platforms linked with a VPN-based network infrastructure to provide seamless and secure connectivity between enterprise and cloud data center sites. To realize our vision of efficiently pooling geographically distributed data center resources, CloudNet provides optimized support for live WAN migration of virtual machines. Specifically, we present a set of optimizations that minimize the cost of transferring storage and virtual machine memory during migrations over low-bandwidth and high-latency Internet links. We evaluate our system on an operational cloud platform distributed across the continental US. During simultaneous migrations of four VMs between data centers in Texas and Illinois, CloudNet's optimizations reduce memory migration time by 65% and lower bandwidth consumption for the storage and memory transfer by 19 GB, a 50% reduction.
- Conference Article
219
- 10.1145/1952682.1952699
- Mar 9, 2011
Virtual machine technology, and the ease with which VMs can be migrated within the LAN, has changed the scope of resource management from allocating resources on a single server to manipulating pools of resources within a data center. We expect WAN migration of virtual machines to likewise transform the scope of provisioning compute resources from a single data center to multiple data centers spread across the country or around the world. In this paper we present the CloudNet architecture, a cloud framework consisting of cloud computing platforms linked with a VPN-based network infrastructure to provide seamless and secure connectivity between enterprise and cloud data center sites. To realize our vision of efficiently pooling geographically distributed data center resources, CloudNet provides optimized support for live WAN migration of virtual machines. Specifically, we present a set of optimizations that minimize the cost of transferring storage and virtual machine memory during migrations over low-bandwidth and high-latency Internet links. We evaluate our system on an operational cloud platform distributed across the continental US. During simultaneous migrations of four VMs between data centers in Texas and Illinois, CloudNet's optimizations reduce memory migration time by 65% and lower bandwidth consumption for the storage and memory transfer by 19 GB, a 50% reduction.
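One family of WAN-transfer optimizations is content-based redundancy elimination: pages whose content the receiver has already seen are replaced by short hash references. The sketch below illustrates the idea only; the cache scheme, hash choice, and page size are simplifications, not CloudNet's exact design:

```python
# Minimal sketch of content-based redundancy elimination for memory-page
# transfer over a slow WAN link. Illustrative only.

import hashlib

def plan_transfer(pages, seen_hashes):
    """Split page indices into full transfers and hash-only references.
    pages: list of bytes objects; seen_hashes: set, updated in place."""
    full, refs = [], []
    for i, page in enumerate(pages):
        digest = hashlib.sha1(page).digest()
        if digest in seen_hashes:
            refs.append(i)       # receiver already holds identical content
        else:
            seen_hashes.add(digest)
            full.append(i)       # must ship the whole page
    return full, refs

zero = bytes(4096)               # zero pages are extremely common
data = b"x" * 4096
pages = [zero, data, zero, zero, data]
full, refs = plan_transfer(pages, set())
```

In this toy trace only two of five pages travel in full; duplicate and zero pages cost one digest each, which is where the large bandwidth reductions over low-bandwidth links come from.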
- Conference Article
21
- 10.1109/dsnw.2011.5958816
- Jun 1, 2011
In conventional virtualized systems, a hypervisor can access the memory pages of guest virtual machines without any restriction, as the hypervisor has full control over the address translation mechanism. In this paper, we propose Secure MMU, a hardware-based mechanism to isolate the memory of guest virtual machines from unauthorized accesses, even from the hypervisor. The proposed mechanism extends the current nested paging support for virtualization at a small hardware cost. With Secure MMU, the hypervisor can flexibly allocate physical memory pages to virtual machines for resource management, but it updates nested page tables only through the secure hardware mechanism, which verifies each mapping change. With this hardware-rooted memory isolation among virtual machines, the memory of a virtual machine in cloud computing can be securely protected from a compromised hypervisor or co-tenant virtual machines.
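The verification idea, that every nested-page-table update must be consistent with page ownership, can be modeled in a few lines of software (a toy model; the ownership table, method names, and checks are illustrative, not the paper's hardware design):

```python
# Toy software model of the Secure MMU idea: nested-page-table updates pass
# through a checker that verifies the physical frame belongs to the target VM.

class SecureMMUModel:
    def __init__(self):
        self.owner = {}   # physical frame -> vm id, set at allocation time
        self.npt = {}     # (vm, guest_pfn) -> machine frame

    def allocate(self, frame, vm):
        # Hypervisor assigns a free frame to a VM; recorded by the checker.
        if frame in self.owner:
            raise ValueError("frame already allocated")
        self.owner[frame] = vm

    def map(self, vm, guest_pfn, frame):
        # Verified nested-page-table update: reject cross-VM mappings.
        if self.owner.get(frame) != vm:
            raise PermissionError("frame not owned by this VM")
        self.npt[(vm, guest_pfn)] = frame

mmu = SecureMMUModel()
mmu.allocate(frame=7, vm="vmA")
mmu.map("vmA", guest_pfn=0, frame=7)
try:
    mmu.map("vmB", guest_pfn=0, frame=7)   # compromised-hypervisor attempt
    blocked = False
except PermissionError:
    blocked = True
```

The hypervisor retains allocation flexibility (it decides which frame goes to which VM), but cannot later map one VM's frame into another VM's address space.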
- Conference Article
1
- 10.1109/bdcloud.2018.00081
- Dec 1, 2018
With the rapid increase in data set sizes of cloud and big data applications, conventional 4KB pages can put high pressure on hardware address translation. The pressure becomes more prominent in a virtualized system, which adds an additional layer of address translation. Virtual-to-physical address translations rely on a hardware Translation Lookaside Buffer (TLB) to cache address mappings. However, even modern hardware offers a very limited number of TLB entries, and TLB misses can cause significant performance degradation. Using 2MB or 1GB hugepages can improve TLB coverage and reduce the TLB miss penalty. Therefore, recent operating systems, such as Linux, have started to adopt hugepages. However, using hugepages brings new challenges, among which is working set size prediction. In a virtualized system, working set size (WSS) estimation, which predicts the actual memory demand of a virtual machine, is often applied to guide virtual machine memory management and memory allocation. We find that traditional WSS estimation methods designed for regular pages cannot simply be ported to a system adopting hugepages. We estimate the working set size of a virtual machine by constructing a miss ratio curve (MRC), which relates the page miss ratio to the virtual machine's memory allocation. Using hugepages increases the overhead of tracking page accesses for MRC construction and also demands much higher precision in representing the miss ratios, as a hugepage miss incurs a much higher penalty than a regular page miss. In this paper, we propose an accurate WSS estimation method for a virtual execution environment with hugepages. We design and implement a low-overhead dynamic memory tracking mechanism that utilizes a hot set to filter frequent short-reuse accesses. Our approach outputs a hugepage miss ratio at high precision. The experimental results show that our method can predict WSS accurately with an average overhead of 1.5%.
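The hot-set filtering idea can be sketched as follows: accesses to a small set of recently hot pages proceed without trapping, so only colder accesses reach the expensive tracking path (a fixed hot-set size and LRU eviction rule are assumed here for illustration; the paper sizes the set dynamically):

```python
# Sketch of hot-set filtering for low-overhead access tracking: pages in a
# small hot set are accessed untracked; only colder accesses are recorded.

from collections import OrderedDict

class HotSetTracker:
    def __init__(self, hot_size):
        self.hot = OrderedDict()   # page -> None, kept in LRU order
        self.hot_size = hot_size
        self.tracked = 0           # accesses that reach the tracking path

    def access(self, page):
        if page in self.hot:
            self.hot.move_to_end(page)   # short-reuse access: filtered out
            return False
        self.tracked += 1                # cold access: recorded for the MRC
        self.hot[page] = None
        if len(self.hot) > self.hot_size:
            self.hot.popitem(last=False) # evict least-recently-used hot page
        return True

t = HotSetTracker(hot_size=2)
trapped = [t.access(p) for p in ["a", "b", "a", "a", "c", "b"]]
```

In the toy trace, the two repeated short-reuse accesses to `a` are absorbed by the hot set, so only four of six accesses pay the tracking cost.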
- Conference Article
4
- 10.1109/igcc.2011.6008556
- Jul 1, 2011
Main memory is one of the primary shared resources in a virtualized environment. Current trends toward supporting a large number of virtual machines increase the demand for physical memory, making energy-efficient memory management more significant. Several optimizations for memory energy consumption have recently been proposed for standalone operating system environments. However, these approaches cannot be directly used in a virtual machine environment because a layer of virtualization separates the hardware from the operating system and the applications executing inside a virtual machine. We first adapt existing mechanisms to run at the VMM layer, offering transparent energy optimizations to the operating systems running inside the virtual machines. Because static approaches have several weaknesses, we then propose a dynamic approach that is able to optimize energy consumption for currently executing virtual machines and adapt to changing virtual machine behaviors. Through detailed trace-driven simulation, we show that the proposed dynamic mechanisms can reduce memory energy consumption by 63.4% with only a 0.6% increase in execution time compared to a standard virtual machine environment.
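The intuition behind such optimizations is that memory devices can only enter low-power states when an entire rank is idle, so consolidating VM pages onto fewer ranks saves energy. A back-of-the-envelope model (all power numbers and the rank layout are invented for illustration and are not from the paper):

```python
# Toy rank-level memory power model: a rank with any resident pages must
# stay active; empty ranks can drop to a low-power self-refresh state.
# The milliwatt figures below are illustrative assumptions only.

RANK_ACTIVE_MW = 500   # assumed power of an active/standby rank
RANK_SLEEP_MW = 50     # assumed power of a rank in self-refresh

def memory_power(pages_per_rank, ranks):
    """Total power given how many pages reside on each rank."""
    active = sum(1 for r in range(ranks) if pages_per_rank.get(r, 0) > 0)
    return active * RANK_ACTIVE_MW + (ranks - active) * RANK_SLEEP_MW

# Spread vs. consolidated placement of 1000 pages over 4 ranks:
spread = memory_power({0: 250, 1: 250, 2: 250, 3: 250}, ranks=4)
packed = memory_power({0: 1000}, ranks=4)
saving = 1 - packed / spread
```

Even this crude model shows why page placement, which only the VMM sees across all guests, is the right layer for the optimization.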
- Book Chapter
- 10.1007/978-981-15-0978-0_25
- Jan 1, 2020
Cloud computing environments can provide significant benefits, including reconfiguring virtualized resources on demand, which is highly beneficial for deploying cloud services. In traditional data centers, applications were typically tied to specific physical servers sized for their peak load, making those data centers expensive to maintain and leaving resource utilization low. Cloud data centers are more flexible and secure while providing better support for on-demand allocation; appropriate virtualization technology can improve server utilization. As the cost of current data centers is mostly driven by their energy consumption, the energy cost of each virtual machine in a heterogeneous environment poses a challenge. In practice, the major challenges of the cloud computing environment arise when designing a private cloud: since each virtual machine is mapped to a physical host according to the resources available on that host, quantifying the performance of scheduling and allocating cloud infrastructure is extremely challenging. This paper focuses on virtualized data and evaluation mechanisms associated with data servers and data centers.
- Research Article
- 10.3724/sp.j.1087.2013.00254
- Sep 23, 2013
- Journal of Computer Applications
In a virtual machine (VM) computing environment, it is difficult to monitor and allocate a VM's memory in real time. To overcome this shortcoming, a real-time method for monitoring and adjusting the memory of Xen virtual machines, called Xen Memory Monitor and Control (XMMC), was proposed and implemented. The method uses Xen hypercalls, which allow it not only to monitor each VM's memory usage in real time but also to dynamically allocate VM memory on demand. The experimental results show that XMMC causes only a very small performance loss, less than 5%, to the VMs' applications. It can monitor and adjust VMs' memory occupation on demand in real time, which eases the management of multiple virtual machines.
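A monitor-and-adjust loop of this kind needs a policy that turns observed usage into a new memory target. A minimal sketch of such a policy follows (the headroom factor, step limit, and bounds are assumptions, not XMMC's parameters; on a real Xen host the returned target could then be applied with a tool such as `xl mem-set`):

```python
# Illustrative target-sizing policy for on-demand VM memory adjustment:
# keep a headroom margin above observed usage and limit per-step swings.

HEADROOM = 1.25       # keep 25% above observed usage (assumed)
STEP_LIMIT_MB = 512   # max change per adjustment interval (assumed)

def next_target(current_mb, used_mb, min_mb=256, max_mb=8192):
    desired = int(used_mb * HEADROOM)
    desired = max(min_mb, min(max_mb, desired))
    # Clamp the per-interval change so the guest is not shocked.
    delta = max(-STEP_LIMIT_MB, min(STEP_LIMIT_MB, desired - current_mb))
    return current_mb + delta

# A VM using 1000 MB with 2048 MB allocated is shrunk gradually:
t1 = next_target(2048, 1000)
t2 = next_target(t1, 1000)
```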
- Research Article
4
- 10.1007/s11390-020-9693-0
- Mar 1, 2020
- Journal of Computer Science and Technology
With the rapid increase of memory consumption by applications running on cloud data centers, we need more efficient memory management in a virtualized environment. Exploiting huge pages becomes more critical for a virtual machine’s performance when it runs large working set size programs. Programs with large working set sizes are more sensitive to memory allocation, which requires us to quickly adjust the virtual machine’s memory to accommodate memory phase changes. It would be much more efficient if we could adjust virtual machines’ memory at the granularity of huge pages. However, existing virtual machine memory reallocation techniques, such as ballooning, do not support huge pages. In addition, in order to drive effective memory reallocation, we need to predict the actual memory demand of a virtual machine. We find that traditional memory demand estimation methods designed for regular pages cannot be simply ported to a system adopting huge pages. How to adjust the memory of virtual machines timely and effectively according to the periodic change of memory demand is another challenge we face. This paper proposes a dynamic huge page based memory balancing system (HPMBS) for efficient memory management in a virtualized environment. We first rebuild the ballooning mechanism in order to dispatch memory in the granularity of huge pages. We then design and implement a huge page working set size estimation mechanism which can accurately estimate a virtual machine’s memory demand in huge pages environments. Combining these two mechanisms, we finally use an algorithm based on dynamic programming to achieve dynamic memory balancing. Experiments show that our system saves memory and improves overall system performance with low overhead.
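The dynamic-programming balancing step can be sketched concretely: given each VM's miss-ratio curve sampled at huge-page granularity, choose per-VM allocations under a total budget so that the summed miss ratio is minimized (the tiny curves below are made up for illustration, and HPMBS's actual objective function may differ):

```python
# Sketch of DP-based memory balancing over per-VM miss ratio curves.

def balance(mrcs, budget):
    """mrcs[v][k] = miss ratio of VM v when given k huge pages.
    Returns (min total miss ratio, allocation per VM)."""
    INF = float("inf")
    # best[b] = (cost, allocations) over the VMs considered so far, budget b
    best = {b: (0.0, []) for b in range(budget + 1)}
    for curve in mrcs:
        nxt = {b: (INF, []) for b in range(budget + 1)}
        for b in range(budget + 1):
            for k in range(min(len(curve) - 1, b) + 1):
                cost, alloc = best[b - k]
                if cost + curve[k] < nxt[b][0]:
                    nxt[b] = (cost + curve[k], alloc + [k])
        best = nxt
    return best[budget]

mrcs = [
    [1.0, 0.5, 0.1, 0.1],   # VM0 saturates at 2 huge pages
    [1.0, 0.8, 0.6, 0.2],   # VM1 keeps improving up to 3 huge pages
]
cost, alloc = balance(mrcs, budget=5)
```

The optimizer correctly gives VM0 only the 2 pages it benefits from and spends the rest where the curve still drops, which is exactly the behavior a WSS-driven balancer wants.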
- Conference Article
17
- 10.1109/ccgrid.2014.107
- May 1, 2014
Live migration of virtual machines is the ability to move running virtual machines between two computers with minimal downtime. Although various migration mechanisms such as pre-copy, post-copy, and state compression have been proposed, they may suffer long migration times when the migrating virtual machines run large computation- and memory-intensive workloads. This paper presents the design and implementation of a novel Time-bound, thread-based Live Migration (TLM) mechanism, in which additional threads are added to the pre-copy live migration algorithm to handle virtual machine state transfers within a bounded time period. Under the time-bound principle, the upper-bound migration time of a virtual machine is proportional to the size of the virtual machine's memory. We propose a CPU over-committing mechanism to minimize migration downtime and avoid performance impacts on other virtual machines while the migration threads are in operation. We have implemented a prototype of TLM on KVM and conducted experiments by migrating virtual machines running a number of Class D OpenMP and MPI NAS parallel benchmarks. Experimental results showed the following: (i) TLM finished live migration within a bounded time period, and users are able to measure the progress of the migration operation. (ii) The CPU over-committing mechanism can be used to minimize live migration downtime; however, the communication performance of virtual machines during live migration also declined as the number of over-committed CPUs was reduced, with the pattern of decline depending on the execution behavior of the applications on the virtual machines. (iii) The execution time increases of the OpenMP and MPI versions of the MG and IS benchmarks in our experiments were approximately equal to the migration times of TLM. (iv) We evaluated our CPU over-committing mechanism against the auto-convergence mechanism recently introduced in kvm-1.6. We found that both mechanisms have their pros and cons, and their performance varies with the application. Based on these results, we believe that the TLM design is practical for live migration of virtual machines running memory-intensive workloads, and that the time-bound principle is an important new feature for pre-copy live migration optimization.
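Why pre-copy needs a time bound is easy to see from a simple round model: each round retransmits the pages dirtied during the previous round, and when the dirty rate rivals the link bandwidth the rounds stop shrinking (all numbers below are illustrative, not from the paper):

```python
# Simple model of iterative pre-copy rounds. Each round ships the pages
# left dirty; the guest dirties more pages while the round is in flight.

def precopy_rounds(mem_mb, dirty_mb_s, bw_mb_s, stop_mb=64, max_rounds=30):
    """Return (rounds, total MB transferred, MB still dirty at stop)."""
    to_send = float(mem_mb)
    total = 0.0
    for r in range(1, max_rounds + 1):
        seconds = to_send / bw_mb_s                   # time for this round
        total += to_send
        to_send = min(mem_mb, dirty_mb_s * seconds)   # dirtied meanwhile
        if to_send <= stop_mb:
            return r, total, to_send                  # small enough to stop
    return max_rounds, total, to_send                 # never converged

# Converges quickly when bandwidth dominates the dirty rate:
r1, sent1, left1 = precopy_rounds(4096, dirty_mb_s=32, bw_mb_s=128)
# Never converges when dirtying matches the link (hence TLM's time bound):
r2, sent2, left2 = precopy_rounds(4096, dirty_mb_s=256, bw_mb_s=128)
```

In the second case plain pre-copy would iterate indefinitely; bounding migration time by memory size, or throttling the guest via CPU over-commit or auto-convergence, forces termination.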
- Research Article
14
- 10.1016/j.jksuci.2023.04.002
- Apr 20, 2023
- Journal of King Saud University - Computer and Information Sciences
Virtualization technology, embodied in Virtual Machines (VMs), is recognized as key infrastructure for cloud computing. This technology is developing rapidly, and cloud data centers face challenges such as Virtual Machine Placement (VMP) for energy efficiency. VMP is defined as the efficient allocation of VMs to Host Machines (HMs) to achieve objectives such as reducing energy consumption, balancing load, and avoiding Service Level Agreement Violations (SLAV). In this paper, VMP is addressed using a Deep Reinforcement Learning (DRL) based strategy to determine the best mapping between VMs and HMs. We present VMP-A3C, an effective strategy that solves VMP using the Asynchronous Advantage Actor-Critic (A3C) algorithm as a new DRL approach. VMP-A3C aims at load balancing across HMs without SLAV while reducing energy consumption as much as possible. VMP-A3C learns to dynamically consolidate VMs onto a minimum number of HMs using migration techniques. We believe there is room for improvement in shutting down lightly loaded HMs through VM migration. The effectiveness of the proposed algorithm has been evaluated from various aspects, including the deployment rate, energy consumption, SLAV, the number of shut-down HMs, and the number of migrated VMs. The difference in energy consumption and in the number of required HMs between VMP-A3C and the best existing state-of-the-art method is 2.54% and 7.14%, respectively.
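For contrast with learned policies like A3C, the classical baseline for consolidation-oriented placement is a bin-packing heuristic such as first-fit decreasing, shown here as a sketch (this is a generic heuristic for comparison, not the paper's method; capacities and demands are made up):

```python
# First-fit-decreasing (FFD) VM placement: sort VMs by demand, then pack
# each onto the first host with room, minimizing the number of active hosts.

def ffd_place(vm_demands, host_capacity):
    """Return a list of hosts, each a list of (vm_index, demand) pairs."""
    hosts = []  # each entry: [remaining_capacity, [(vm, demand), ...]]
    order = sorted(enumerate(vm_demands), key=lambda x: -x[1])
    for vm, demand in order:
        for host in hosts:
            if host[0] >= demand:          # first host that still fits
                host[0] -= demand
                host[1].append((vm, demand))
                break
        else:
            hosts.append([host_capacity - demand, [(vm, demand)]])
    return [h[1] for h in hosts]

placement = ffd_place([6, 2, 5, 3, 4], host_capacity=10)
```

A DRL placer must at minimum beat heuristics of this kind while also accounting for migration cost and SLAV, which FFD ignores entirely.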
- Dissertation
- 10.6845/nchu.2010.00763
- Jan 1, 2011
To build clusters or data centers with virtual machines, an efficient management system is required. Within such a system, virtual machine migration may be the most critical challenge for the system administrator: when the environment changes, transparent live migration from the serving machine to another is preferable. In a computing environment combining cloud computing and virtualization technologies, we can simply connect two different private clouds by live storage migration if both have VMM (virtual machine monitor) support. Although several VMMs have recently added support for storage migration, how to effectively reduce long service downtime and migration time remains a challenging research subject. In this paper, based on kernel-based virtual machines (KVM), we use Copy-on-Write (CoW) technology to copy virtual machine file systems into templates and to record file-system updates in overlay files. During storage migration, we can reduce the transfer time by sending only the overlay files to the destination host if the same template already exists there. Most overlay files are transferred using secure remote copy (SCP); that is, all on-going updates are iteratively bundled and processed through SCP until the last portion of updates, which is transferred using regular QEMU block transfer. Through modifications to the QEMU source code, we are able to accomplish the migration effectively. By applying CoW technology for migration, we can send the template to the destination host in batch. In our experiments, about 80 seconds are saved by not sending a 3GB template file. However, if the virtual machine is running a process with a large number of I/Os, the advantage of using CoW technology becomes insignificant.