Abstract

The Hadoop Distributed File System (HDFS) is designed to reliably store and manage large-scale files. All files in HDFS are managed by a single server, the NameNode, which keeps metadata in its main memory for every file stored in HDFS. As a result, HDFS suffers a performance penalty as the number of small files grows: a mass of small files places a heavy storage and management burden on the NameNode, and the number of files HDFS can hold is constrained by the size of the NameNode's main memory. To improve the efficiency of storing and accessing small files on HDFS, we propose the Small Hadoop Distributed File System (SHDFS), which is built on the original HDFS. Compared to the original HDFS, SHDFS adds two novel modules: a merging module and a caching module. In the merging module, we propose a correlated-files model that identifies correlated files through user-based collaborative filtering and merges them into a single large file, reducing the total number of files. In the caching module, we use a log-linear model to identify hot-spot data that users access frequently, and we design a special memory subsystem to cache these data, speeding up access to them.
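The abstract leaves the merging module's mechanics open, so the sketch below only illustrates the general shape of user-based collaborative filtering over a user-file access-count matrix; the names (cosine, correlated_files, access) and the top-k neighbourhood are assumptions for illustration, not details from the paper.

import math
from collections import defaultdict

def cosine(u, v):
    # Cosine similarity between two sparse access-count vectors (dicts).
    shared = set(u) & set(v)
    num = sum(u[f] * v[f] for f in shared)
    den = (math.sqrt(sum(x * x for x in u.values())) *
           math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def correlated_files(access, target_user, k=3):
    # access: {user: {file: access_count}}. Score files by how heavily the
    # k users most similar to target_user use them (user-based CF), so
    # files that similar users touch together rank as correlated.
    sims = sorted(((cosine(access[target_user], access[u]), u)
                   for u in access if u != target_user), reverse=True)[:k]
    scores = defaultdict(float)
    for sim, u in sims:
        for f, c in access[u].items():
            scores[f] += sim * c
    return sorted(scores, key=scores.get, reverse=True)

access = {"alice": {"a.log": 5, "b.log": 3},
          "bob": {"a.log": 4, "b.log": 2, "c.log": 6},
          "carol": {"c.log": 1}}
print(correlated_files(access, "alice"))  # ['c.log', 'a.log', 'b.log']

Files that rank together for many users would then be appended into one large container file with an index from each small file's name to its offset, which is what shrinks the NameNode's metadata footprint.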
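Likewise, the caching module's log-linear model is only named in the abstract. One plausible reading, sketched below under that assumption, is a least-squares fit of log(access count) against time, used to forecast which files stay hot; HotSpotCache, the windowing scheme, and the capacity of 64 are all hypothetical choices.

import math

def log_linear_forecast(counts):
    # counts: non-empty per-window access counts for one file. Fit
    # log(count + 1) = a + b * window by least squares, then extrapolate
    # one window ahead to predict near-future demand.
    xs = range(len(counts))
    ys = [math.log(c + 1) for c in counts]
    n = len(counts)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    a = my - b * mx
    return math.exp(a + b * n) - 1

class HotSpotCache:
    # Fixed-capacity in-memory store for the bytes of forecast-hot files.
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.data = {}

    def refresh(self, history, read_file):
        # history: {path: [count per window]}. Keep the files with the
        # highest forecast access counts resident in memory.
        hot = sorted(history, key=lambda p: log_linear_forecast(history[p]),
                     reverse=True)[:self.capacity]
        self.data = {p: read_file(p) for p in hot}

    def get(self, path, read_file):
        # Serve hot files from memory; fall back to an HDFS read otherwise.
        return self.data[path] if path in self.data else read_file(path)

A file with a rising trend (b > 0) outranks one with a briefly high but decaying count, which is the point of fitting a trend rather than caching by raw frequency alone.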
