Abstract

Multi-level buffer cache hierarchies are now common in client/server cluster configurations, especially in today's big data deployments. However, the multi-level caching policies deployed so far typically use independent cache replacement algorithms at each level, which has two major drawbacks: (1) file blocks may be redundantly cached at multiple levels, reducing the effective aggregate cache size; and (2) replacement decisions at lower-level caches are less accurate because of weakened locality. The resulting inefficient use of cache resources can cause noticeable performance degradation for big data applications. To address these problems, we propose new adaptive multi-level exclusive caching policies that dynamically adjust replacement and placement decisions in response to changing access patterns. First, to capture locality information in a multi-level cache hierarchy, we propose a Reuse Distance based Adaptive Replacement Caching (ReDARC) algorithm that adopts reuse distance as its locality measure and adaptively balances capacity between a Small Reuse Distance (SRD) set and a Large Reuse Distance (LRD) set. Second, to achieve exclusive caching and make global caching decisions, we propose an Adaptive Level-Aware Caching Algorithm (ALACA) that works collaboratively with ReDARC. ALACA uses an adaptive probabilistic PUSH technique that allows lower-level caches to push blocks to higher-level caches, and it decides blocks' caching locations together with ReDARC. In this way, we achieve multi-level exclusive caching with significant cache performance improvement. Our trace-driven simulation experiments show that the proposed policies reduce the average client response time by 8 to 56 percent compared with other multi-level cache schemes.
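
The abstract only names the mechanisms, so the following is a minimal sketch (not the authors' ReDARC/ALACA implementation) of the two ideas it describes: tracking per-block reuse distance, splitting cached blocks into an SRD list and an LRD list whose capacity shares are balanced adaptively, and letting a lower-level cache probabilistically push an evicted block up to a higher-level cache. All names and parameters here (ReuseDistanceCache, maybe_push_up, rd_threshold, push_probability) are hypothetical placeholders.

```python
# Minimal, hypothetical sketch of the two ideas named in the abstract:
# (1) classify cached blocks by observed reuse distance into an SRD list and
#     an LRD list, and adaptively shift capacity between them, and
# (2) have a lower-level cache probabilistically push an evicted block to the
#     cache above it so the two levels stay (mostly) exclusive.
# This is NOT the ReDARC/ALACA implementation from the paper, only an illustration.

import random
from collections import OrderedDict


class ReuseDistanceCache:
    """Toy single-level cache that partitions blocks into SRD and LRD LRU lists."""

    def __init__(self, capacity, rd_threshold=64):
        self.capacity = capacity
        self.rd_threshold = rd_threshold   # boundary between "small" and "large" reuse distance
        self.srd = OrderedDict()           # blocks with small reuse distance (MRU at the end)
        self.lrd = OrderedDict()           # blocks with large reuse distance
        self.srd_target = capacity // 2    # adaptive share of capacity given to the SRD list
        self.clock = 0
        self.last_access = {}              # block -> logical time of its previous access

    def access(self, block):
        """Record an access; return True on a hit, False on a miss."""
        self.clock += 1
        prev = self.last_access.get(block)
        rd = (self.clock - prev) if prev is not None else float("inf")  # cold miss: infinite distance
        self.last_access[block] = self.clock

        hit = block in self.srd or block in self.lrd
        self.srd.pop(block, None)
        self.lrd.pop(block, None)

        # Adapt the split: hits with small reuse distance argue for a larger
        # SRD share, hits with large reuse distance for a larger LRD share.
        if hit:
            if rd <= self.rd_threshold:
                self.srd_target = min(self.capacity, self.srd_target + 1)
            else:
                self.srd_target = max(0, self.srd_target - 1)

        # Place the block in the list that matches its observed reuse distance.
        target = self.srd if rd <= self.rd_threshold else self.lrd
        target[block] = True
        self._evict_if_needed()
        return hit

    def _evict_if_needed(self):
        while len(self.srd) + len(self.lrd) > self.capacity:
            # Evict from whichever list exceeds its adaptive share.
            if len(self.srd) > self.srd_target or not self.lrd:
                self.srd.popitem(last=False)   # drop the LRU block of the SRD list
            else:
                self.lrd.popitem(last=False)   # drop the LRU block of the LRD list


def maybe_push_up(evicted_block, upper_cache, push_probability=0.5):
    """Probabilistic PUSH: a lower-level cache forwards an evicted block to the
    level above it with some probability, so the levels avoid caching the same block."""
    if random.random() < push_probability:
        upper_cache.access(evicted_block)
```

In the paper, both the SRD/LRD balance and the push decision are adjusted adaptively from observed access patterns; the fixed rd_threshold and push_probability values above merely stand in for those adaptive mechanisms.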
