Abstract

NAND flash memory is widely used in embedded systems. Because of its out-of-place update constraint, NAND flash management requires an address translator that maps logical addresses from the file system to physical addresses in NAND flash. As NAND flash capacity grows, it becomes vitally important to keep the RAM footprint of the address mapping table small without introducing significant performance overhead. Demand-based address mapping is an effective approach to this problem: it stores the address mapping table in NAND flash (in so-called translation pages) and caches mapping items on demand in RAM. However, this mapping method introduces many extra translation-page operations that can incur substantial performance overhead. This paper solves the two most important problems in translation page management. First, to reduce the overhead of frequent translation page updates, a page-level caching mechanism is proposed that unifies the granularity of the cached mapping unit in NAND flash and in the translation cache. Second, to reduce the garbage collection overhead caused by translation pages, a translation-page-based data-assemblage strategy is designed that groups data pages corresponding to the same translation page into one data block, reducing the cost of translation page updates during garbage collection to a minimum. The presented scheme is evaluated using a set of benchmarks and compared to a representative previous scheme.
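The two ideas in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; all names, cache sizes, and the LRU policy below are assumptions chosen for illustration. The first part caches whole translation pages on demand (page-level caching, so the cached unit matches the unit stored in NAND flash); the second groups dirty logical pages by their owning translation page before they are written, so that collecting one data block touches as few translation pages as possible.

```python
# Illustrative sketch only (not the paper's code): a demand-based
# page-level mapping cache plus a data-assemblage grouping step.
# ENTRIES_PER_TPAGE and CACHE_CAPACITY are toy values (assumptions).

ENTRIES_PER_TPAGE = 4      # mapping entries held in one translation page
CACHE_CAPACITY = 2         # translation pages kept in RAM at once

class TranslationCache:
    """LRU cache of whole translation pages (page-level caching)."""
    def __init__(self, flash_mapping):
        self.flash = flash_mapping   # translation pages "in NAND": id -> [PPN, ...]
        self.cache = {}              # tpage_id -> cached copy of the page
        self.order = []              # LRU order, oldest first

    def lookup(self, lpn):
        """Translate a logical page number to a physical page number."""
        tpage_id, offset = divmod(lpn, ENTRIES_PER_TPAGE)
        if tpage_id not in self.cache:
            # Miss: fetch the whole translation page on demand.
            if len(self.cache) >= CACHE_CAPACITY:
                victim = self.order.pop(0)            # evict LRU page
                self.flash[victim] = self.cache.pop(victim)  # write back
            self.cache[tpage_id] = list(self.flash[tpage_id])
        else:
            self.order.remove(tpage_id)               # refresh LRU position
        self.order.append(tpage_id)
        return self.cache[tpage_id][offset]

def assemble_by_tpage(dirty_lpns):
    """Group logical pages by owning translation page (data assemblage),
    so pages destined for one data block share few translation pages."""
    groups = {}
    for lpn in sorted(dirty_lpns):
        groups.setdefault(lpn // ENTRIES_PER_TPAGE, []).append(lpn)
    return groups
```

With this grouping, a victim data block's valid pages belong to one (or very few) translation pages, so garbage collection needs only one translation-page update instead of one per data page, which is the cost reduction the abstract describes.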
