Abstract

Merge & Reduce is a general algorithmic scheme in the theory of data structures. Its main purpose is to transform static data structures (which support only queries) into dynamic data structures (which also allow insertions of new elements) with as little overhead as possible. This can be used to turn classic offline algorithms for summarizing and analyzing data into streaming algorithms. We transfer these ideas to the setting of statistical data analysis in streaming environments. Our approach is conceptually different from previous settings where Merge & Reduce has been employed: instead of summarizing the data, we combine the Merge & Reduce framework directly with statistical models. This enables performing computationally demanding data analysis tasks on massive data sets. The computations are divided into small tractable batches whose size is independent of the total number of observations n. The results are combined in a structured way at the cost of a bounded O(log n) factor in their memory requirements. It is only necessary, though nontrivial, to choose an appropriate statistical model and to design merge and reduce operations for each specific type of model. We illustrate our Merge & Reduce schemes on simulated and real-world data employing (Bayesian) linear regression models, Gaussian mixture models, and generalized linear models.

Highlights

  • Data sets with a massive number of observations have become increasingly common, making scalability one of the main challenges of modern data analysis

  • We propose Merge & Reduce as a technique to address these scalability limitations in regression analysis

  • Merge & Reduce can also be leveraged to design streaming algorithms for computational problems based on coresets

Introduction

Data sets with a massive number of observations have become increasingly common, making scalability one of the main challenges of modern data analysis. Merge & Reduce is well known in computer science and has mainly been used for transforming static data structures into dynamic data structures with little overhead [8]. This can be leveraged to design streaming algorithms for a computational problem based on coresets. For some statistical problems, however, it is known that small coresets do not exist in the worst case. This is true, e.g., for specific generalized linear models; see the lower bounds in [37,39]. Instead of summarizing the data, we propose using statistical models fitted on small batches as concise summaries. Combining these statistical models via the Merge & Reduce framework, we can again turn an offline algorithm into a data stream algorithm. The algorithms should be amenable to distributed computing environments like MapReduce [16].
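To make the bucket structure behind this concrete, below is a minimal Python sketch of the classic Merge & Reduce stream processing the introduction refers to. All names here (MergeReduce, batch_size, _reduce, _merge) are our own, and the per-batch summary, the ordinary least-squares sufficient statistics (X'X, X'y) for which merging happens to be exact, is an illustrative stand-in for the paper's model-specific merge and reduce operations on fitted statistical models.

    import numpy as np

    class MergeReduce:
        """Bucket-based Merge & Reduce over a stream of observations.

        Per-bucket summary: OLS sufficient statistics (X'X, X'y) plus a
        count. This is an illustrative choice for which merging is exact;
        the paper designs merge/reduce operations per statistical model.
        """

        def __init__(self, batch_size):
            self.batch_size = batch_size
            self.buffer = []    # raw observations of the current batch
            self.levels = {}    # level -> summary; one per level => O(log n) summaries

        def _reduce(self, batch):
            # Reduce: condense one raw batch into a fixed-size summary.
            X = np.array([x for x, _ in batch])
            y = np.array([t for _, t in batch])
            return (X.T @ X, X.T @ y, len(batch))

        @staticmethod
        def _merge(s, t):
            # Merge: combine two summaries of the same level into one.
            return (s[0] + t[0], s[1] + t[1], s[2] + t[2])

        def insert(self, x, y):
            self.buffer.append((x, y))
            if len(self.buffer) < self.batch_size:
                return
            summary, self.buffer = self._reduce(self.buffer), []
            level = 0
            # Like binary addition: merging two level-i summaries yields
            # one level-(i+1) summary, so at most one summary per level
            # is stored at any time.
            while level in self.levels:
                summary = self._merge(self.levels.pop(level), summary)
                level += 1
            self.levels[level] = summary

        def coefficients(self):
            # Query: merge all stored summaries (a partial last batch is
            # ignored in this sketch) and solve the normal equations.
            summaries = iter(self.levels.values())
            total = next(summaries)
            for s in summaries:
                total = self._merge(total, s)
            XtX, Xty, _ = total
            return np.linalg.solve(XtX, Xty)

    # Hypothetical usage on a simulated stream:
    rng = np.random.default_rng(1)
    mr = MergeReduce(batch_size=200)
    beta = np.array([2.0, -1.0, 0.5])
    for _ in range(100_000):
        x = rng.normal(size=3)
        mr.insert(x, x @ beta + rng.normal())
    print(mr.coefficients())   # approximately recovers beta

The carry-propagation loop in insert mirrors binary addition, which is what bounds the number of stored summaries, and hence the memory overhead relative to a single batch, by the O(log n) factor stated in the abstract.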

Our contribution
Related work
Preliminaries and notation
The principle
Simulation study
Data generation
Linear regression
Linear regression in the presence of mixtures
Poisson regression
Bicycle data
Conclusions
Findings
Compliance with ethical standards