Abstract

This study proposes a cache replacement policy technique to increase the cache hit rate, thereby improving the efficiency and performance of cache management. Heuristic cache replacement policies are mechanisms designed empirically in advance to determine which block should be replaced. This study explains why heuristic policies do not achieve high accuracy for certain data access patterns. To prevent erroneous replacement decisions, a machine learning method is proposed that predicts the blocks that will be requested in the future. The core operation of the proposed method is as follows: when a cache miss occurs, the machine learning model predicts a future block reference sequence based on the input block reference sequence. Each predicted block is added to the prediction buffer and, if it also exists in the non-access buffer, removed from the non-access buffer. Once the prediction buffer is filled, the conventional replacement policy can be replaced by an O(1) operation that evicts a block taken from the non-access buffer. The proposed method improves the hit rate of the least recently used (LRU) algorithm by 77%, the least frequently used (LFU) algorithm by 65%, and the adaptive replacement cache (ARC) by 77%, and shows a hit rate similar to that of state-of-the-art research. The proposed method reinforces existing heuristic policies and enables consistent performance for both LRU- and LFU-friendly workloads.
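A minimal Python sketch of this miss-time flow is shown below, assuming a predictor object exposing a predict_next_blocks method; the class name, buffer sizes, and the FIFO fallback eviction are illustrative assumptions rather than the paper's published implementation.

    from collections import OrderedDict, deque

    class PredictionAssistedCache:
        """Illustrative sketch of the prediction-buffer / non-access-buffer idea."""

        def __init__(self, capacity, model, prediction_len=8, history_len=64):
            self.capacity = capacity
            self.model = model                        # assumed: predict_next_blocks(seq, n) -> block IDs
            self.prediction_len = prediction_len
            self.cache = set()                        # resident blocks
            self.history = deque(maxlen=history_len)  # recent block reference sequence
            self.prediction_buffer = set()            # blocks the model expects to be reused soon
            self.non_access_buffer = OrderedDict()    # resident blocks not expected to be reused

        def _refill_prediction_buffer(self):
            # Feed the recent reference sequence to the model; a predicted block
            # is no longer an eviction candidate.
            for blk in self.model.predict_next_blocks(list(self.history), self.prediction_len):
                self.prediction_buffer.add(blk)
                self.non_access_buffer.pop(blk, None)

        def access(self, block):
            self.history.append(block)
            if block in self.cache:
                return True                            # hit
            self._refill_prediction_buffer()           # miss: consult the model
            if len(self.cache) >= self.capacity:
                if self.non_access_buffer:
                    victim, _ = self.non_access_buffer.popitem(last=False)  # O(1) eviction
                else:
                    victim = next(iter(self.cache))    # fallback when every resident block is predicted
                self.cache.discard(victim)
                self.prediction_buffer.discard(victim)
            self.cache.add(block)
            if block not in self.prediction_buffer:
                self.non_access_buffer[block] = True
            return False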

Highlights

  • Cache is a concept that is used to reduce the performance difference between storage layers; it is applied in a variety of fields such as operating systems, databases, and network systems [1]–[3]

  • Solid-state drives (SSDs) provide faster speeds than hard disk drives (HDDs), but a system performance bottleneck remains because the central processing unit (CPU) and dynamic random access memory (DRAM) provide three times lower access latency [4]

  • If the number of failures is greater than or equal to a set threshold, the model fills the prediction buffer again to increase the accuracy of the prediction buffer
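The refill rule in the last highlight could be expressed roughly as follows; the threshold value and the class and method names are assumptions made only for illustration.

    class RefillTrigger:
        """Sketch of the refill rule: once prediction failures reach a threshold,
        the prediction buffer is rebuilt from a fresh model prediction."""

        def __init__(self, threshold=4):
            self.threshold = threshold      # assumed tunable parameter
            self.failures = 0

        def observe(self, block, prediction_buffer, refill):
            if block in prediction_buffer:
                prediction_buffer.discard(block)   # prediction confirmed by a real reference
                return
            self.failures += 1                     # the model failed to anticipate this block
            if self.failures >= self.threshold:
                refill()                           # re-run the model on the recent sequence
                self.failures = 0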


Summary

Introduction

Cache is a concept that is used to reduce the performance difference between storage layers; it is applied in a variety of fields such as operating systems, databases, and network systems [1]–[3]. Solid-state drives (SSDs) provide faster speeds than hard disk drives (HDDs), but a system performance bottleneck remains because the central processing unit (CPU) and dynamic random access memory (DRAM) provide three times lower access latency [4]. To address this bottleneck, a cache is placed between the storage tiers to store frequently used items. The LFU adapts well to looping patterns, but it cannot adapt to changes in the workload because it only remembers how many times each block was previously requested; when the working set changes frequently, this history becomes misleading. To solve this problem, policies that combine these two algorithms have been proposed.
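A small, self-contained illustration (not taken from the paper) of the LFU weakness described above: after the working set shifts, blocks from the old phase keep their high frequency counts and squat in the cache, so the newly active blocks thrash in the remaining slot.

    from collections import Counter

    def lfu_hit_rate(trace, capacity):
        """Simulate a plain LFU cache over a block reference trace."""
        cache, freq, hits = set(), Counter(), 0
        for blk in trace:
            freq[blk] += 1
            if blk in cache:
                hits += 1
                continue
            if len(cache) >= capacity:
                victim = min(cache, key=lambda b: freq[b])  # evict the least frequently used block
                cache.remove(victim)
            cache.add(blk)
        return hits / len(trace)

    # Phase 1 loops over blocks 0-3; then the working set shifts to blocks 4-7.
    trace = [b for _ in range(50) for b in range(4)] + [b for _ in range(50) for b in range(4, 8)]
    print(lfu_hit_rate(trace, capacity=4))  # low overall hit rate: phase 2 is almost all misses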


