Abstract

Hadoop HDFS is an open-source project from the Apache Software Foundation for scalable, distributed computing and data storage. HDFS has become a critical component in today's cloud computing environment, with a wide range of applications built on top of it. However, the initial design of HDFS introduced a single point of failure: HDFS contains only one active name node, and if this name node experiences a software or hardware failure, the whole HDFS cluster is unusable until recovery of the name node completes. This is why people are reluctant to deploy HDFS for applications that require high availability. In this paper, we present a solution that enables high availability for HDFS's name node through efficient metadata replication. Our solution has two major advantages over existing ones: we utilize multiple active name nodes, instead of one, forming a cluster that serves metadata requests simultaneously; and we implement a pub/sub system to handle the metadata replication process across these active name nodes efficiently. Based on this solution, we implement a prototype called NCluster and integrate it with HDFS. We also evaluate NCluster to demonstrate its feasibility and effectiveness. The experimental results show that our solution performs well, with low replication cost, good throughput, and good scalability.
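The core idea of the abstract, several active name nodes kept consistent by publishing metadata edits over a pub/sub channel, can be sketched as follows. This is a minimal illustrative model, not the paper's actual NCluster implementation; the class and method names (`MetadataBus`, `NameNode`, `create_file`) are hypothetical.

```python
class MetadataBus:
    """Toy publish/subscribe channel carrying namespace edit records
    (a stand-in for NCluster's pub/sub replication layer)."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, node):
        self.subscribers.append(node)

    def publish(self, edit):
        # Fan the edit record out to every subscribed name node.
        for node in self.subscribers:
            node.apply(edit)


class NameNode:
    """Holds an in-memory copy of the file-system namespace metadata."""

    def __init__(self, name, bus):
        self.name = name
        self.namespace = {}  # path -> metadata
        bus.subscribe(self)

    def apply(self, edit):
        op, path, meta = edit
        if op == "create":
            self.namespace[path] = meta
        elif op == "delete":
            self.namespace.pop(path, None)

    def create_file(self, bus, path, meta):
        # The node serving the client publishes the edit; every active
        # name node (itself included) applies it, so all replicas stay
        # consistent and any of them can answer later metadata requests.
        bus.publish(("create", path, meta))


bus = MetadataBus()
nodes = [NameNode(f"nn{i}", bus) for i in range(3)]
nodes[0].create_file(bus, "/data/a.txt", {"replication": 3})
# Every active name node now holds identical namespace metadata.
assert all(n.namespace == nodes[0].namespace for n in nodes)
```

In this sketch each edit is applied synchronously to all subscribers; a real deployment would instead need durable, ordered delivery and failure handling, which is precisely the cost the paper's "low replication cost" claim is measuring.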
