Abstract

In a virtualized server, a page cache (which caches I/O data) is managed in both the hypervisor and the guest Operating System (OS). This leads to a well-known issue called double caching. Double caching is the presence of the same data in both the hypervisor page cache and the guest OS page caches. Experiments we performed show that almost 64% of the hypervisor page cache content is also present in guest page caches. Double caching is therefore a significant source of memory waste in virtualized servers, particularly when disk I/O-intensive workloads are executed (which is common in today's data centers). Since memory is the limiting resource for workload consolidation, double caching is a critical issue. This paper presents a novel solution (called Cacol) to avoid double caching. Unlike existing solutions, Cacol is not intrusive: it does not require guest OS modification and induces very little performance overhead for user applications. We implemented Cacol in the KVM hypervisor, a very popular virtualization system. We evaluated Cacol and compared it with the default Linux solutions. The results confirm the advantages of Cacol (no guest modification and negligible overhead) over existing solutions.
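For context (this sketch is not from the paper): the default Linux/KVM way to avoid double caching is to bypass the hypervisor page cache with O_DIRECT, which QEMU exposes as the cache=none disk option. The minimal C sketch below contrasts a cached open of a guest disk image with an O_DIRECT open; the file name is hypothetical, and the code illustrates the baseline mechanism and its trade-off, not Cacol itself.

    /* Sketch: cached vs. direct access to a guest disk image on the host.
     * With a plain open(), every guest block read/written is also cached
     * in the host page cache, duplicating what the guest OS already caches.
     * With O_DIRECT (QEMU's cache=none), the host page cache is bypassed,
     * so there is no duplication, but every guest cache miss hits the disk. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Cached path: data is kept in both host and guest page caches. */
        int cached_fd = open("guest-disk.img", O_RDWR);

        /* Direct path: host page cache is bypassed entirely. */
        int direct_fd = open("guest-disk.img", O_RDWR | O_DIRECT);

        if (cached_fd < 0 || direct_fd < 0) {
            perror("open");
            return EXIT_FAILURE;
        }

        /* O_DIRECT requires buffers aligned to the device block size. */
        void *buf = NULL;
        if (posix_memalign(&buf, 4096, 4096) != 0)
            return EXIT_FAILURE;
        memset(buf, 0, 4096);

        ssize_t n = pread(direct_fd, buf, 4096, 0); /* served from disk, not host cache */
        printf("read %zd bytes without populating the host page cache\n", n);

        free(buf);
        close(cached_fd);
        close(direct_fd);
        return 0;
    }

The cost of this baseline is what motivates non-intrusive alternatives such as Cacol: bypassing the host cache removes duplication, but it also gives up the second-level caching that benefits guests with small memory allocations.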
