Abstract

The Hadoop Distributed File System (HDFS) has become a representative cloud storage platform, owing to its reliable, scalable, and low-cost storage capability. However, HDFS delivers poor storage and access performance when processing a huge number of small files, because massive numbers of small files place a heavy burden on the NameNode. Moreover, HDFS provides neither an optimization for storing and accessing small files nor a prefetching mechanism to reduce I/O operations. This paper proposes an optimized scheme, Structured Index File Merging (SIFM), which combines a two-level file index, structured metadata storage, and a prefetching and caching strategy to reduce I/O operations and improve access efficiency. Extensive experiments demonstrate that SIFM achieves better performance for storing and accessing large numbers of small files on HDFS, compared with native HDFS and Hadoop Archives (HAR).
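To illustrate the general idea behind such merging schemes, the following is a minimal, hypothetical sketch (not the authors' implementation): small files are packed into larger block files, a two-level index maps a file name first to its block and then to its offset and length within that block, and a simple cache prefetches all files from a block on first access. All class and method names here are illustrative assumptions.

```python
class SmallFileMerger:
    """Illustrative sketch of small-file merging with a two-level index
    and block-level prefetching; not the actual SIFM implementation."""

    def __init__(self):
        self.blocks = []           # merged "block files" (bytes)
        self.primary_index = {}    # level 1: file name -> block id
        self.secondary_index = []  # level 2: per block, name -> (offset, length)
        self.cache = {}            # prefetch/read cache: name -> bytes

    def merge(self, files, block_size=4096):
        """Pack small files (name -> bytes) into blocks of at most block_size."""
        block, index = bytearray(), {}
        for name, data in files.items():
            if block and len(block) + len(data) > block_size:
                self._seal(block, index)
                block, index = bytearray(), {}
            index[name] = (len(block), len(data))
            block.extend(data)
        if block:
            self._seal(block, index)

    def _seal(self, block, index):
        """Finalize a block and record both index levels."""
        bid = len(self.blocks)
        self.blocks.append(bytes(block))
        self.secondary_index.append(index)
        for name in index:
            self.primary_index[name] = bid

    def read(self, name):
        """Read one file; prefetch every file in the same block into the cache."""
        if name in self.cache:
            return self.cache[name]
        bid = self.primary_index[name]
        for n, (off, length) in self.secondary_index[bid].items():
            self.cache[n] = self.blocks[bid][off:off + length]
        return self.cache[name]
```

The point of the sketch: one merged block replaces many NameNode entries, the two index levels keep lookups cheap, and prefetching a whole block on first access amortizes I/O across the small files that tend to be read together.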
