Abstract

Computer clusters with the shared-nothing architecture are the major computing platforms for big data processing and analysis. In cluster computing, data partitioning and sampling are two fundamental strategies to speed up the computation of big data and increase scalability. In this paper, we present a comprehensive survey of the methods and techniques of data partitioning and sampling with respect to big data processing and analysis. We start with an overview of the mainstream big data frameworks on Hadoop clusters. The basic methods of data partitioning are then discussed including three classical horizontal partitioning schemes: range, hash, and random partitioning. Data partitioning on Hadoop clusters is also discussed with a summary of new strategies for big data partitioning, including the new Random Sample Partition (RSP) distributed model. The classical methods of data sampling are then investigated, including simple random sampling, stratified sampling, and reservoir sampling. Two common methods of big data sampling on computing clusters are also discussed: record-level sampling and block-level sampling. Record-level sampling is not as efficient as block-level sampling on big distributed data. On the other hand, block-level sampling on data blocks generated with the classical data partitioning methods does not necessarily produce good representative samples for approximate computing of big data. In this survey, we also summarize the prevailing strategies and related work on sampling-based approximation on Hadoop clusters. We believe that data partitioning and sampling should be considered together to build approximate cluster computing frameworks that are reliable in both the computational and statistical respects.
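As an illustration of the classical techniques named above, the following is a minimal, single-machine sketch of range, hash, and random horizontal partitioning together with reservoir sampling. It is not code from the surveyed frameworks; the function names and parameters are assumptions made for this example.

```python
# Illustrative sketch (not the survey's code) of the classical horizontal
# partitioning schemes -- range, hash, and random -- plus reservoir sampling.
# All function names and parameters here are assumptions for this example.
import bisect
import random
from collections import defaultdict


def range_partition(records, key, boundaries):
    """Assign each record to a partition by where its key falls among sorted boundaries."""
    parts = defaultdict(list)
    for r in records:
        parts[bisect.bisect_left(boundaries, key(r))].append(r)
    return parts


def hash_partition(records, key, num_parts):
    """Assign each record to a partition by hashing its key."""
    parts = defaultdict(list)
    for r in records:
        parts[hash(key(r)) % num_parts].append(r)
    return parts


def random_partition(records, num_parts, seed=None):
    """Assign each record to a uniformly random partition."""
    rng = random.Random(seed)
    parts = defaultdict(list)
    for r in records:
        parts[rng.randrange(num_parts)].append(r)
    return parts


def reservoir_sample(stream, k, seed=None):
    """Keep a uniform random sample of k items from a stream of unknown length (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir


if __name__ == "__main__":
    data = [{"id": i, "value": random.random()} for i in range(1000)]
    by_range = range_partition(data, key=lambda r: r["id"], boundaries=[250, 500, 750])
    by_hash = hash_partition(data, key=lambda r: r["id"], num_parts=4)
    by_rand = random_partition(data, num_parts=4, seed=42)
    sample = reservoir_sample(iter(data), k=10, seed=42)
    print(len(by_range), len(by_hash), len(by_rand), len(sample))
```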

Highlights

  • An overwhelming volume of data is being generated from business transactions and many other sources

  • The survey presented in this paper gives a concise summary of the most common methods of partitioning and sampling to support big data analysis on Hadoop clusters

  • To support sampling-based approximate big data analysis, we present a concise overview of these methods with respect to big data on Hadoop clusters


Summary

Introduction

An overwhelming volume of data is being generated from business transactions and many other sources. A common strategy for analyzing such data is to partition it into smaller blocks and process the blocks in parallel on a computing cluster. The MapReduce computing model[5] is used to apply this strategy in the mainstream big data analysis frameworks[6,7,8,9], such as Apache Hadoop (http://hadoop.apache.org/) and Apache Spark (http://spark.apache.org/). These frameworks implement a shared-nothing architecture (https://www.oreilly.com/learning/processing-data-inhadoop) in which each node is independent in terms of both data and resources. Studies have shown that when the data size is large enough, parallelization based on distributed data blocks can result in a linear speed-up as computing resources increase in the cluster[11]. However, scaling out a computing cluster incurs additional costs, and the necessary investment may not always be available in practice[12].
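To make the divide-and-conquer strategy concrete, the following is a toy, single-machine sketch in which data blocks are mapped over in parallel and the partial results are then reduced, in the spirit of the MapReduce model. It is only an illustration under assumed names and does not use the Hadoop or Spark APIs.

```python
# Toy sketch of the divide-and-conquer strategy behind the MapReduce model:
# map a function over each data block independently, then merge (reduce) the
# partial results. Illustrative only; not the Hadoop or Spark API.
from collections import Counter
from functools import reduce
from multiprocessing import Pool


def map_block(block):
    """Map step: count word occurrences within one data block."""
    counts = Counter()
    for line in block:
        counts.update(line.split())
    return counts


def reduce_counts(left, right):
    """Reduce step: combine partial counts from two blocks."""
    left.update(right)
    return left


def word_count(blocks):
    """Process blocks in parallel, then combine the partial results."""
    with Pool() as pool:
        partials = pool.map(map_block, blocks)
    return reduce(reduce_counts, partials, Counter())


if __name__ == "__main__":
    blocks = [
        ["big data on hadoop", "hadoop clusters"],
        ["data partitioning and data sampling"],
    ]
    print(word_count(blocks).most_common(3))
```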


