Abstract
HDFS is a good choice of distributed storage for large files. Processing a large number of small files, however, creates a performance bottleneck: a massive number of small files produces excessive metadata that wastes NameNode memory, and the resulting frequent function calls add considerable processing overhead, so HDFS performance degrades when handling small files. A detailed performance evaluation is conducted to understand the impact of an increasing number of small files on Hadoop processing. This paper evaluates sequential files, CombineFileInputFormat, HAR, and Hadoop streaming as techniques for dealing with the small file problem in HDFS. The empirical evaluation shows that HAR and CombineFileInputFormat perform better and give consistent, stable results as the number of files to be processed grows.

Keywords: Hadoop, MapReduce, HAR, Hadoop streaming, Sequential file, CombineFileInputFormat, Small files, HDFS