Abstract

Cache replacement policies are developed to help ensure optimal use of limited resources. Many such algorithms exist, but relatively few dynamically adapt to traffic patterns. Tunable algorithms typically rely on off-line training mechanisms or trial-and-error to determine optimal characteristics. This article introduces a novel option: utilizing multiple algorithms to establish an efficient replacement policy that dynamically adapts to changes in traffic load and access patterns. A simulation of this approach using two existing, simple, and effective policies, namely LRU and LFU, was studied to assess the potential of the adaptive policy. This policy is compared and contrasted with other cache replacement policies using public traffic samples mentioned in the literature as well as a synthetic model created from existing samples. Simulation results suggest that the adaptive cache replacement policy is beneficial, primarily at smaller cache sizes.

Highlights

  • Caching in computing has been a proven form of performance enhancement for some time, most notably in memory paging [1] [2]

  • While simple to implement and requiring less computational power than most other algorithms, Least Recently Used (LRU) has been outclassed by several other replacement algorithms: Balamash and Krunz's experiments showed that for large cache sizes, Least Unified Value (LUV), GDS, and Hyper-G produced better results for both hit ratio (HR) and latency ratio (LR) [3], while Bahn et al. found that for large cache sizes, LUV, Hybrid, Size, Mix, and sw-Least Frequently Used (LFU) performed better for HR and LUV was better for LR [5]

  • The Size algorithm is simple to implement compared to algorithms using multiple parameters, but exhibits generally poor performance: Balamash et al. found that Size was a middle-of-the-pack performer for HR and the absolute worst for byte hit ratio (BHR) and LR using a simulated Digital Equipment Corporation (DEC) trace, compared against the LUV, GDS, Hyper-G, LRU, and Hybrid algorithms
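The highlights above compare policies by hit ratio (HR) and byte hit ratio (BHR). As a quick reference, these metrics can be computed from a request trace as sketched below; this is a generic illustration, and the data layout (object id, size pairs plus a set of hit indices) is an assumption, not the paper's trace format.

```python
def hit_metrics(requests, hits):
    """Compute hit ratio (HR) and byte hit ratio (BHR) for a trace.

    requests: list of (object_id, size_in_bytes), one entry per request
    hits:     set of indices into `requests` that were served from cache
    """
    total_bytes = sum(size for _, size in requests)
    hit_bytes = sum(size for i, (_, size) in enumerate(requests) if i in hits)
    hr = len(hits) / len(requests)       # fraction of requests served from cache
    bhr = hit_bytes / total_bytes        # fraction of bytes served from cache
    return hr, bhr
```

Note that HR and BHR can diverge sharply: a policy that favors many small objects (as Size does) can score well on HR while performing poorly on BHR, which matches the comparison reported above.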


Summary

INTRODUCTION

Caching in computing has been a proven form of performance enhancement for some time, most notably in memory paging [1] [2]. A web cache typically stores its objects in some form of memory or disk. Because these storage resources are finite, cache replacement policy algorithms are used to determine which objects to remove from the cache as new objects, deemed more productive to cache, are accessed [7]. Ideally, a replacement policy always keeps the objects that will provide the best performance. In the approach studied here, the cache employs a tuning algorithm that chooses the best policy based on current access patterns. This effort focuses on caching within a web environment.
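The adaptive idea described above can be sketched as follows. This is a minimal illustration under assumed mechanics, not the paper's implementation: two shadow caches score how pure LRU and pure LFU would perform on the same request stream, and the real cache evicts using whichever policy is currently winning.

```python
from collections import OrderedDict

class AdaptiveCache:
    """Toy adaptive replacement policy: shadow LRU and LFU caches track
    how each pure policy would score, and evictions from the real cache
    follow the policy with more shadow hits so far. Illustrative only."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()       # real cache, recency-ordered
        self.freq = {}                   # access counts of resident objects
        self.lru_shadow = OrderedDict()  # what a pure LRU cache would hold
        self.lfu_shadow = {}             # key -> count in a pure LFU cache
        self.lru_hits = self.lfu_hits = 0

    def _shadow_access(self, key):
        # Score the LRU shadow cache.
        if key in self.lru_shadow:
            self.lru_hits += 1
            self.lru_shadow.move_to_end(key)
        else:
            if len(self.lru_shadow) >= self.capacity:
                self.lru_shadow.popitem(last=False)   # drop least recent
            self.lru_shadow[key] = True
        # Score the LFU shadow cache.
        if key in self.lfu_shadow:
            self.lfu_hits += 1
            self.lfu_shadow[key] += 1
        else:
            if len(self.lfu_shadow) >= self.capacity:
                victim = min(self.lfu_shadow, key=self.lfu_shadow.get)
                del self.lfu_shadow[victim]           # drop least frequent
            self.lfu_shadow[key] = 1

    def access(self, key):
        """Request an object; returns True on a cache hit."""
        self._shadow_access(key)
        if key in self.store:
            self.store.move_to_end(key)               # refresh recency
            self.freq[key] += 1
            return True
        if len(self.store) >= self.capacity:
            if self.lru_hits >= self.lfu_hits:
                victim, _ = self.store.popitem(last=False)    # evict as LRU
            else:
                victim = min(self.store, key=self.freq.get)   # evict as LFU
                del self.store[victim]
            del self.freq[victim]
        self.store[key] = True
        self.freq[key] = 1
        return False
```

The tuning signal here (cumulative shadow hit counts) is the simplest possible choice; a production variant might decay the counts over a sliding window so the cache can re-adapt when traffic patterns shift.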

CACHE REPLACEMENT POLICIES
Size Algorithm
Hybrid Algorithm
Mix Algorithm
ADAPTIVE REPLACEMENT POLICY
PRELIMINARY RESULTS
CONCLUSIONS