Abstract

Small file accesses remain limited by disk head movement on modern disk drives, despite their high bandwidth. Small file performance can be improved by grouping and clustering, which place multiple files from the same directory, and the blocks of a single file, at contiguous disk locations, respectively. These schemes allow file systems to use large data transfers when accessing small files, reducing the number of disk accesses. However, as a file system ages, the disk becomes too fragmented to support grouping and clustering of small files. This fragmentation makes it difficult for the file system to exploit large data transfers, increasing disk I/Os. To address this problem, we describe a de-fragmented file system (DFS). Using data cached in memory, DFS dynamically relocates and clusters the data blocks of small fragmented files. In addition, DFS clusters related small files in the same directory at contiguous disk locations. Measurements of our DFS implementation show that these techniques alleviate file fragmentation significantly; in particular, small file read performance exceeds that of a traditional file system by 78%.
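The core mechanism the abstract describes, rewriting a fragmented file's already-cached blocks into one contiguous extent, can be illustrated with a minimal sketch. The abstract gives no implementation details, so everything below (the block map, free map, and the relocate_if_fragmented routine) is a hypothetical simplification, not DFS's actual interface; it only shows why caching matters: since the data is in memory, relocation costs one large write and no extra reads.

    #include <stdio.h>
    #include <string.h>

    #define NBLOCKS 64        /* size of the simulated disk, in blocks */
    #define BLKSZ   16        /* bytes per simulated block             */

    static char disk[NBLOCKS][BLKSZ];  /* simulated disk blocks        */
    static int  free_map[NBLOCKS];     /* 1 = in use, 0 = free         */

    /* A small file: its on-disk block map plus a cached in-memory copy. */
    struct file {
        int  nblocks;
        int  blkno[8];                 /* on-disk location of each block */
        char cache[8][BLKSZ];          /* cached contents of each block  */
    };

    /* A file is fragmented if its blocks are not one contiguous run. */
    static int is_fragmented(const struct file *f)
    {
        for (int i = 1; i < f->nblocks; i++)
            if (f->blkno[i] != f->blkno[i - 1] + 1)
                return 1;
        return 0;
    }

    /* Find the start of a run of n contiguous free blocks, or -1. */
    static int find_contiguous(int n)
    {
        for (int start = 0; start + n <= NBLOCKS; start++) {
            int ok = 1;
            for (int i = 0; i < n; i++)
                if (free_map[start + i]) { ok = 0; break; }
            if (ok)
                return start;
        }
        return -1;
    }

    /* Relocate a fragmented file: write its cached blocks into one
     * contiguous extent, free the old blocks, and update the block map.
     * The data is already cached, so no disk reads are required. */
    static void relocate_if_fragmented(struct file *f)
    {
        if (!is_fragmented(f))
            return;
        int start = find_contiguous(f->nblocks);
        if (start < 0)
            return;                    /* no contiguous space; leave as-is */
        for (int i = 0; i < f->nblocks; i++) {
            free_map[f->blkno[i]] = 0;                    /* release old  */
            memcpy(disk[start + i], f->cache[i], BLKSZ);  /* large write  */
            f->blkno[i] = start + i;
            free_map[start + i] = 1;
        }
    }

    int main(void)
    {
        /* A 3-block file scattered across the disk. */
        struct file f = { .nblocks = 3, .blkno = { 5, 20, 41 } };
        for (int i = 0; i < f.nblocks; i++) {
            free_map[f.blkno[i]] = 1;
            snprintf(f.cache[i], BLKSZ, "blk%d", i);
            memcpy(disk[f.blkno[i]], f.cache[i], BLKSZ);
        }

        relocate_if_fragmented(&f);
        printf("file now at blocks %d..%d\n",
               f.blkno[0], f.blkno[f.nblocks - 1]);
        return 0;
    }

A real system would presumably run such a pass in the background over the buffer cache, and, as the abstract notes, would additionally co-locate small files from the same directory rather than treating each file in isolation.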
