Abstract

Most flash-based solid-state drives (SSDs) employ on-board Dynamic Random Access Memory (DRAM) to buffer hot write data. Write and overwrite operations can then be absorbed by the DRAM cache, provided the applications' I/O access patterns exhibit sufficient locality, thereby avoiding flushes of write data to the underlying SSD cells. After analyzing typical real-world workloads on SSDs, we observed that the buffered data of small write requests are more likely to be re-accessed than those of large write requests. To efficiently utilize the limited space of the DRAM cache, this paper proposes an adaptive, request granularity-based cache management scheme for SSDs. First, we introduce the request block, corresponding to a write request, as the cache management granularity, and propose a dynamic method for classifying request blocks as small or large. Next, we design three-level linked lists that support different promotion routines for small and large request blocks once their data are hit in the cache. Finally, we present a replacement scheme that evicts the request block with the minimum cost, taking both access hotness and time discounting into account. Experimental results show that our proposal improves cache hits by 21.8% and reduces overall I/O latency by 14.7% on average, compared to state-of-the-art cache management schemes inside SSDs.
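To make the scheme concrete, the following is a minimal sketch of a three-level list organization with size-aware promotion and minimum-cost eviction, in the spirit of the design described above. The `RequestBlockCache` class, the fixed 16-page small/large threshold, the promotion rule for large blocks, and the exponential cost formula are all illustrative assumptions, not the paper's definitions; in particular, the paper classifies small and large blocks dynamically rather than with a fixed threshold.

```python
from collections import OrderedDict
import time


class RequestBlockCache:
    """Sketch of a three-level request-block cache. Blocks enter at a
    level chosen by request size and are promoted on hits; eviction
    removes the block with the minimum hotness/recency cost. All
    policies and parameters here are illustrative assumptions."""

    def __init__(self, capacity, size_threshold=16):
        self.capacity = capacity              # max number of cached request blocks
        self.size_threshold = size_threshold  # pages; small/large split (assumed fixed here)
        # levels[0] = coldest list, levels[2] = hottest; block_id -> metadata
        self.levels = [OrderedDict(), OrderedDict(), OrderedDict()]

    def _find(self, block_id):
        for lvl, lst in enumerate(self.levels):
            if block_id in lst:
                return lvl
        return None

    def access(self, block_id, num_pages):
        """Handle one write/overwrite request; return True on a cache hit."""
        lvl = self._find(block_id)
        if lvl is not None:
            meta = self.levels[lvl].pop(block_id)
            meta["hits"] += 1
            meta["last_access"] = time.monotonic()
            # Assumed promotion routine: small blocks move up one level per
            # hit, while large blocks must accumulate two hits per promotion.
            small = meta["pages"] <= self.size_threshold
            if small or meta["hits"] % 2 == 0:
                lvl = min(lvl + 1, 2)
            self.levels[lvl][block_id] = meta
            return True
        self._insert(block_id, num_pages)
        return False

    def _insert(self, block_id, num_pages):
        if sum(len(lst) for lst in self.levels) >= self.capacity:
            self._evict()
        # Assumed insertion policy: small blocks enter one level warmer,
        # reflecting their higher re-access likelihood.
        entry_lvl = 1 if num_pages <= self.size_threshold else 0
        self.levels[entry_lvl][block_id] = {
            "pages": num_pages, "hits": 0, "last_access": time.monotonic(),
        }

    def _evict(self):
        # Cost combines access hotness with an exponential time discount;
        # the exact weighting is an assumption for illustration.
        now = time.monotonic()
        victim, best_cost, best_lvl = None, float("inf"), 0
        for lvl, lst in enumerate(self.levels):
            for bid, meta in lst.items():
                cost = (meta["hits"] + 1) * 0.5 ** (now - meta["last_access"])
                if cost < best_cost:
                    victim, best_cost, best_lvl = bid, cost, lvl
        if victim is not None:
            del self.levels[best_lvl][victim]
```

For example, `cache.access(block_id=7, num_pages=4)` returns False on the first call (a miss that inserts the block) and True on subsequent calls (hits that promote it), while a block written once and never re-accessed decays toward the minimum cost and becomes the next eviction victim.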
