Abstract

NAND flash memory-based SSDs are widely used in modern storage systems to improve performance and reduce energy consumption. Existing cache management schemes rarely consider the parallel resources of the flash memory array inside an SSD, so these resources cannot be fully exploited. The resulting I/O-access conflicts degrade SSD performance and lengthen response times under write-intensive workloads. Most cache replacement strategies give eviction priority to clean data to alleviate I/O-access conflicts and thereby boost SSD performance: because clean data need not be written back to flash, replacing them ideally carries a low cost. However, clean data depend entirely on missed read requests, so the proportion of clean data among replaced cache data is low. To solve this issue, we propose a dynamic, active, and collaborative cache management scheme named DAC, in which the cache comprises a cold cache, a hot cache, a ghost cold cache, and a ghost hot cache. DAC detects hot-cold changes in the I/O requests of a workload based on the ghost cache sizes, adjusting the real cache sizes to better serve I/O requests. A dynamic write-back window (DWW) mechanism adjusts the write-back window size in the cold cache, and the write-back thresholds update automatically as the I/O patterns in the workload change. When a flash chip is idle, DAC produces clean data by proactively writing normal cold-cache data back to flash memory, and the resulting clean data are migrated into the active cold cache. DAC preferentially replaces data in the active cold cache, avoiding the eviction of data that have just been read from flash memory. This approach raises the replacement rate of clean data and thereby improves SSD performance.
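The cache organization described above can be sketched in a few dozen lines. The sketch below is an illustrative approximation under stated assumptions, not the paper's exact algorithm: the class name `DACCache`, the promote-on-hit policy, the unit-step ghost-hit adjustment of the cold-cache target size, and the one-entry-at-a-time proactive write-back are all assumptions made for exposition (ghost-cache bounding and the DWW thresholds are omitted).

```python
from collections import OrderedDict

class DACCache:
    """Illustrative DAC-style cache sketch (assumed structure): a cold cache
    and a hot cache, two ghost caches recording recently evicted keys, and an
    active cold cache holding clean data produced by proactive write-back."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cold = OrderedDict()         # normal (possibly dirty) cold data
        self.active_cold = OrderedDict()  # clean data, replaced first
        self.hot = OrderedDict()
        self.ghost_cold = OrderedDict()   # keys only, no cached data
        self.ghost_hot = OrderedDict()
        self.cold_target = capacity // 2  # adaptive cold/hot split

    def _size(self):
        return len(self.cold) + len(self.active_cold) + len(self.hot)

    def access(self, key, value=None):
        # Hit in a real cache: re-accessed data is promoted to the hot cache.
        for lst in (self.active_cold, self.cold, self.hot):
            if key in lst:
                val = lst.pop(key)
                self.hot[key] = val
                return val
        # Ghost hit: shift the target split toward the cache that was too small.
        if key in self.ghost_cold:
            del self.ghost_cold[key]
            self.cold_target = min(self.capacity, self.cold_target + 1)
        elif key in self.ghost_hot:
            del self.ghost_hot[key]
            self.cold_target = max(0, self.cold_target - 1)
        # Miss: evict if full, then insert into the cold cache.
        while self._size() >= self.capacity:
            self._evict()
        self.cold[key] = value
        return None

    def proactive_writeback(self):
        """When the flash chip is idle, flush the oldest cold entry to flash
        and migrate it, now clean, into the active cold cache."""
        if self.cold:
            key, val = self.cold.popitem(last=False)
            # (write val back to flash memory here)
            self.active_cold[key] = val

    def _evict(self):
        # Prefer clean data in the active cold cache (lowest replacement cost).
        if self.active_cold:
            key, _ = self.active_cold.popitem(last=False)
            self.ghost_cold[key] = None
        elif self.cold and len(self.cold) >= self.cold_target:
            key, _ = self.cold.popitem(last=False)
            self.ghost_cold[key] = None
        elif self.hot:
            key, _ = self.hot.popitem(last=False)
            self.ghost_hot[key] = None
        else:
            key, _ = self.cold.popitem(last=False)
            self.ghost_cold[key] = None
```

In this sketch, a full cache first sacrifices clean active-cold data, so dirty pages stay cached longer and freshly read pages in the hot cache are protected, mirroring the eviction preference described above.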
We conduct extensive experiments on a state-of-the-art NVMe SSD simulator to validate DAC's advantages over seven existing caching schemes: LRU, CFLRU, GCaR-CFLRU, LCR, ARC, AD-LRU, and MQsim. The results show that DAC shortens the response time by up to 60.8%, with an average improvement of 24.4%. For the response-time cliff, the maximum improvement offered by DAC reaches approximately 68.71%, with an average improvement of 26.9%. In addition, the erase count is reduced by up to 93.3%, with an average improvement of 17.04%.
