Abstract

Solid-state drives (SSDs) have recently become a common storage component in computer systems, fueled by continued bit-cost reductions achieved with smaller feature sizes and multiple-level cell technologies. However, as the flash memory stores more bits per cell, its performance and reliability degrade substantially. To address this problem, a fast non-volatile memory (NVM)-based cache has been employed within SSDs to reduce the long latency required to write data. Absorbing small writes in a fast NVM cache can also reduce the number of flash memory erase operations. To maximize the benefits of an NVM cache, it is important to increase the NVM cache utilization. In this paper, we propose and study ProCache, a simple NVM cache-management scheme that makes cache-entrance decisions based on random probability testing. Our scheme is motivated by the observation that frequently written hot data will eventually enter the cache with high probability, whereas infrequently accessed cold data will not enter the cache easily. Owing to its simplicity, ProCache is easy to implement at a substantially smaller cost than similar previously studied techniques. We evaluate ProCache and conclude that it achieves performance comparable to that of a more complex reference counter-based cache-management scheme.
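The admission rule described above can be illustrated with a short sketch. The Python code below is a hypothetical illustration only, not the paper's implementation: each write to a page not already cached passes a random test with probability p before the page is admitted into the NVM cache. A page written k times is therefore admitted with probability 1 - (1 - p)^k, which approaches 1 for frequently written hot data while remaining small for cold data. The class name, parameters, and eviction placeholder are assumptions for the sake of the example.

```python
import random


class ProbabilisticCacheAdmission:
    """Minimal sketch of probability-based cache admission (hypothetical;
    the actual ProCache design may differ in detail).

    On each write, a page not already cached is admitted into the NVM
    cache with probability p; otherwise the write bypasses the cache and
    goes to flash. Hot pages pass the random test sooner or later and end
    up cached, while rarely written cold pages are unlikely to be admitted.
    """

    def __init__(self, capacity, p):
        self.capacity = capacity  # pages the NVM cache can hold (assumed parameter)
        self.p = p                # admission probability
        self.cache = {}           # logical page number -> data

    def write(self, lpn, data):
        if lpn in self.cache:
            # Cache hit: update in place, no flash write needed.
            self.cache[lpn] = data
            return "nvm-hit"
        if random.random() < self.p:
            # Passed the random test: admit the page into the NVM cache.
            if len(self.cache) >= self.capacity:
                self._evict_one()
            self.cache[lpn] = data
            return "nvm-admit"
        # Failed the test: bypass the cache and write directly to flash.
        return "flash"

    def _evict_one(self):
        # Placeholder eviction: drop an arbitrary page. A real design
        # would flush the victim to flash, e.g., in LRU order.
        victim = next(iter(self.cache))
        del self.cache[victim]


# Example: a page written 20 times with p = 0.1 is admitted at least once
# with probability 1 - 0.9**20 (about 88%).
cache = ProbabilisticCacheAdmission(capacity=64, p=0.1)
results = [cache.write(lpn=42, data=b"x") for _ in range(20)]
print(results.count("flash"), "writes bypassed the cache before admission")
```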

Highlights

  • As circuit, manufacturing, and architectural innovations have made flash memory-based solid-state disks (SSDs) attractive from a performance-cost perspective, SSDs have become a common storage component in recent computer systems

  • A fast non-volatile memory (NVM)-based cache is widely used in modern SSDs

  • This paper investigates a data caching management scheme that exploits the characteristics of a multiple-level cell flash memory

Summary

Introduction

As circuit, manufacturing, and architectural innovations have made flash memory-based solid-state disks (SSDs) attractive from a performance-cost perspective, SSDs have become a common storage component in recent computer systems. To maintain a high hit ratio and maximize cache utilization, various cache-management mechanisms have been proposed for SSDs [2,3,4,5,6,7,8,9]. These schemes require either high computational overhead or significant memory space, both of which significantly increase the cost of the storage system and diminish the effectiveness of the cache.

Background
Probability-based cache management
Trace-driven analysis results
Comparison of off-line and ProCache algorithms
Effect of p
Effect of c
Effect of warm-up
Performance comparisons
Conclusions