Abstract

Over the past decades, the capacity of NAND flash memory has increased dramatically, leading to the use of nonvolatile flash in the system’s memory hierarchy. This growing capacity requires a large RAM footprint to store the logical-to-physical address mapping. The demand-based approach can effectively reduce and control the RAM footprint; however, it also introduces extra address translation overhead, which may degrade system performance. In this article, we present CDFTL, an adaptive Caching mechanism for Demand-based Flash Translation Layers, for NAND flash memory storage systems. CDFTL combines a fine-grained, entry-based caching mechanism that exploits the temporal locality of workloads with a coarse-grained, translation-page-based caching mechanism that exploits their spatial locality. By selectively caching address mappings on demand and adaptively adjusting the space allocated to the two granularities, CDFTL utilizes the RAM space effectively and improves the cache hit ratio. We evaluate CDFTL on a real embedded hardware platform using a variety of I/O traces. Experimental results show that our technique achieves an 11.13% reduction in average system response time and a 35.21% reduction in translation block erase counts compared with previous work.
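To illustrate the idea of caching mappings at two granularities, the sketch below shows a simplified lookup path: a fine-grained cache of individual logical-to-physical entries is probed first (temporal locality), then a coarse-grained cache of whole translation pages (spatial locality), and only on a double miss is the mapping fetched from flash. This is not the authors' implementation; the structure names, the entries-per-translation-page constant, the flash-access hooks, and the omitted eviction and resizing policies are all assumptions for illustration only.

```c
/*
 * Minimal sketch of a two-granularity demand-based mapping cache in the
 * spirit of CDFTL. All names, sizes, and back-end hooks are assumed.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define ENTRIES_PER_TPAGE 512u         /* mapping entries per translation page (assumed) */

typedef struct {                       /* fine-grained cached mapping entry */
    uint32_t lpn, ppn;
    bool     valid;
} map_entry_t;

typedef struct {                       /* coarse-grained cached translation page */
    uint32_t tvpn;                     /* translation virtual page number */
    uint32_t ppns[ENTRIES_PER_TPAGE];
    bool     valid;
} tpage_t;

typedef struct {
    map_entry_t *entry_cache;          /* fine-grained region of the RAM cache */
    size_t       entry_slots;
    tpage_t     *tpage_cache;          /* coarse-grained region of the RAM cache */
    size_t       tpage_slots;
    size_t       entry_hits, tpage_hits, misses;
} mapping_cache_t;

/* Hypothetical hook: read one mapping entry from a translation page in flash. */
extern uint32_t read_tpage_entry(uint32_t tvpn, uint32_t offset);

uint32_t translate(mapping_cache_t *c, uint32_t lpn)
{
    /* 1. Probe the fine-grained entry cache first (temporal locality). */
    for (size_t i = 0; i < c->entry_slots; i++) {
        if (c->entry_cache[i].valid && c->entry_cache[i].lpn == lpn) {
            c->entry_hits++;
            return c->entry_cache[i].ppn;
        }
    }

    /* 2. Probe the coarse-grained translation-page cache (spatial locality). */
    uint32_t tvpn   = lpn / ENTRIES_PER_TPAGE;
    uint32_t offset = lpn % ENTRIES_PER_TPAGE;
    for (size_t i = 0; i < c->tpage_slots; i++) {
        if (c->tpage_cache[i].valid && c->tpage_cache[i].tvpn == tvpn) {
            c->tpage_hits++;
            return c->tpage_cache[i].ppns[offset];
        }
    }

    /* 3. Miss in both caches: fetch the mapping from flash on demand. A full
     *    implementation would also choose which granularity to cache it in
     *    and evict a victim; those policies are omitted here. */
    c->misses++;
    return read_tpage_entry(tvpn, offset);
}
```

An adaptive scheme along these lines would periodically compare the hit counts of the two regions and shift cache slots toward whichever granularity is serving more requests; the precise resizing policy used by CDFTL is described in the full paper, not in this abstract.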
