Abstract

Even in modern SSDs, the disparity between random and sequential write bandwidth is more than 10-fold. Moreover, random writes can shorten the limited lifespan of SSDs because they incur more NAND block erases per write. To overcome the problems of random writes, we propose a new file system for SSDs, SFS. SFS is similar to the traditional log-structured file system (LFS) in that it transforms all random writes at the file system level into sequential ones at the SSD level, so as to exploit the maximum write bandwidth of the SSD. Unlike the traditional LFS, however, which performs hot/cold data separation on segment cleaning, SFS takes a new data grouping strategy on writing. When data blocks are to be written, SFS puts those with similar update likelihood into the same segment to sharpen the bimodal distribution of segment utilization, thereby minimizing the inevitable segment cleaning overhead that occurs in any log-structured file system. We have implemented a prototype of SFS by modifying the Linux-based NILFS2 and compared it with three state-of-the-art file systems using several realistic workloads. Our experiments on SSDs show that SFS outperforms LFS by up to 2.5 times in terms of throughput. In comparison to modern file systems, SFS drastically reduces the block erase count inside SSDs by up to 23.3 times. Although SFS is targeted at SSDs, its data grouping on writing should also work well on HDDs. To confirm this, we repeated the same set of experiments on HDDs and found that SFS is quite promising there as well: although slow random reads on HDDs make SFS slightly less effective, SFS still outperforms LFS by 1.7 times.
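
To make the on-writing grouping idea concrete, the following is a minimal, self-contained C sketch, not the actual SFS implementation: the block structure, the hotness value standing in for update likelihood, and the segment size are illustrative assumptions. It shows the essence of the strategy, ordering blocks that are about to be written by estimated hotness and then packing adjacent blocks into segments, so frequently updated blocks share segments and rarely updated blocks share segments.

/*
 * Illustrative sketch of on-writing data grouping (not SFS code).
 * Blocks queued for writing are sorted by an assumed "hotness" metric
 * (e.g. recent write frequency) and packed into fixed-size segments,
 * so each segment holds blocks with similar update likelihood.
 */
#include <stdio.h>
#include <stdlib.h>

#define SEGMENT_BLOCKS 4   /* assumed number of blocks per segment */

struct block {
    int    id;
    double hotness;        /* assumed estimate of update likelihood */
};

/* Sort in descending hotness so similar blocks become adjacent. */
static int by_hotness_desc(const void *a, const void *b)
{
    double ha = ((const struct block *)a)->hotness;
    double hb = ((const struct block *)b)->hotness;
    return (ha < hb) - (ha > hb);
}

int main(void)
{
    struct block blocks[] = {
        { 0, 0.90 }, { 1, 0.10 }, { 2, 0.80 }, { 3, 0.05 },
        { 4, 0.70 }, { 5, 0.20 }, { 6, 0.85 }, { 7, 0.15 },
    };
    size_t n = sizeof(blocks) / sizeof(blocks[0]);

    qsort(blocks, n, sizeof(blocks[0]), by_hotness_desc);

    /* Pack adjacent (now similarly hot) blocks into segments; over time
     * hot segments empty out quickly while cold segments stay full,
     * sharpening the bimodal distribution of segment utilization. */
    for (size_t i = 0; i < n; i++) {
        if (i % SEGMENT_BLOCKS == 0)
            printf("segment %zu:", i / SEGMENT_BLOCKS);
        printf(" blk%d(%.2f)", blocks[i].id, blocks[i].hotness);
        if ((i + 1) % SEGMENT_BLOCKS == 0 || i + 1 == n)
            printf("\n");
    }
    return 0;
}

In this toy run the first segment receives only hot blocks and the second only cold ones; a real system would of course estimate update likelihood online rather than from fixed values.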
