Abstract

While NUMA systems are widely used as target machines for virtualization, each data access request issued by a virtual machine (VM) on a NUMA system may take a different amount of time depending not only on whether the access is remote, but also on contention for shared resources. Mainly for this reason, each VM running on a NUMA system experiences irregular data access performance over time. Because existing hypervisors, such as KVM, VMware, and Xen, have yet to consider this, users of VMs can neither predict their data access performance nor even recognize the data access performance they have experienced. In this paper, we propose a novel VM placement technique to resolve this problem of irregular data access performance for VMs running on NUMA systems. A hypervisor equipped with our technique provides the illusion of a private memory subsystem to each VM, guaranteeing that each VM's required data access latency is met on average. To enable this feature, we periodically evaluate the average data access latency of each VM using hardware performance monitoring units. After every evaluation, our Mcredit-based VM migration algorithm tries to migrate the VCPU or memory of any VM not meeting its required data access latency to another node that offers lower data access latency. We implemented a prototype for the KVM hypervisor on Linux 3.10.10. Experimental results show that, on a four-node NUMA system, our technique maintains the required data access performance levels of VMs running various workloads while consuming less than 1 percent of the cycles of one core and 0.3 percent of the system memory bandwidth.
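To make the periodic evaluate-then-migrate loop described above concrete, the following is a minimal sketch in C, not the paper's implementation: the helper functions (pmu_read_avg_latency_ns, pick_less_loaded_node, migrate_vm_to_node), the credit constants, and the per-VM structure are hypothetical stand-ins for the hypervisor-level mechanisms the abstract names (PMU sampling, Mcredit bookkeeping, and VCPU/memory migration).

/*
 * Hedged sketch of one evaluation period on a four-node NUMA system.
 * All helpers below are placeholders, not the authors' API.
 */
#include <stdio.h>

#define NR_NODES 4          /* four-node NUMA system used in the evaluation */
#define NR_VMS   2

struct vm {
    int    id;
    int    node;            /* node currently hosting the VM's VCPU/memory  */
    double required_lat_ns; /* per-VM data access latency requirement       */
    double credit;          /* migration credit (Mcredit-style bookkeeping) */
};

/* Placeholder: the real system samples hardware performance monitoring units. */
static double pmu_read_avg_latency_ns(const struct vm *v)
{
    return (v->node == 0) ? 90.0 : 140.0;   /* synthetic values for the sketch */
}

/* Placeholder: choose a node expected to give lower data access latency. */
static int pick_less_loaded_node(int current)
{
    return (current + 1) % NR_NODES;
}

/* Placeholder: VCPU or memory migration performed by the hypervisor. */
static void migrate_vm_to_node(struct vm *v, int node)
{
    printf("VM %d: migrating from node %d to node %d\n", v->id, v->node, node);
    v->node = node;
}

/* One evaluation period: compare each VM's measured average latency with its
 * requirement and, if enough credit has accrued, migrate it to a better node. */
static void evaluate_and_migrate(struct vm *vms, int n)
{
    for (int i = 0; i < n; i++) {
        double lat = pmu_read_avg_latency_ns(&vms[i]);
        if (lat > vms[i].required_lat_ns && vms[i].credit >= 1.0) {
            vms[i].credit -= 1.0;   /* spend one migration credit */
            migrate_vm_to_node(&vms[i], pick_less_loaded_node(vms[i].node));
        } else {
            vms[i].credit += 0.5;   /* accrue credit while staying put */
        }
    }
}

int main(void)
{
    struct vm vms[NR_VMS] = {
        { .id = 0, .node = 1, .required_lat_ns = 100.0, .credit = 1.0 },
        { .id = 1, .node = 0, .required_lat_ns = 100.0, .credit = 1.0 },
    };
    for (int period = 0; period < 3; period++)   /* three evaluation periods */
        evaluate_and_migrate(vms, NR_VMS);
    return 0;
}

The credit check is what keeps the mechanism cheap: a VM only triggers a migration when it both misses its latency target and has saved enough credit, which bounds how often VCPU or memory migrations occur and keeps the monitoring overhead within the small CPU and bandwidth budgets reported in the abstract.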
