Abstract
In recent years, vast amounts of data of many kinds, from pictures and videos captured by our cameras to software logs from sensor networks and Internet routers operating around the clock, have been generated. This has given rise to new big data problems, which require new algorithms capable of handling such large volumes of data and are, as a result, very computationally demanding. In this paper, we parallelize one of these new algorithms, namely the HyperLogLog algorithm, which estimates the number of distinct items in a large data set with minimal memory usage, lowering the typical memory requirement of this kind of computation from O(n) to O(1). We have implemented parallelizations based on OpenMP and OpenCL and evaluated them on a standard multicore system, an Intel Xeon Phi, and two GPUs from different vendors. The results obtained in our experiments, in which we reach a speedup of 88.6 with respect to an optimized sequential implementation, are very positive, particularly taking into account the need to run this kind of algorithm on large amounts of data.
Highlights
Very often the processing of very large data sets does not require exact solutions; approximate ones, which can be obtained much more efficiently, are sufficient
In this paper we develop two parallel implementations of the HyperLogLog algorithm: one based on OpenMP and targeted at multicore processors and Intel Xeon Phi accelerators, and another based on OpenCL, which can run on these systems as well as on other kinds of accelerators such as GPUs
The increasingly widespread management of large amounts of data is a critical challenge that demands both algorithms suited to the particular needs of these problems and optimized implementations of them
Summary
Very often the processing of very large data sets does not require exact solutions; approximate ones, which can be obtained much more efficiently, are sufficient. HyperLogLog (HLL) [2] is a very powerful approximate algorithm in the sense that it can practically and efficiently provide a good estimate of the cardinality of a data set, that is, the number of distinct items it contains. This value has many real-life applications, making its computation a must for high-profile companies working in big data.
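To make the idea concrete, the sketch below is a minimal, illustrative HyperLogLog in Python, not the paper's optimized OpenMP/OpenCL implementation. It follows the standard HLL scheme: each hashed item selects one of m = 2^b registers with its first b bits, the register records the maximum leading-zero rank seen in the remaining bits, and the cardinality is estimated from the harmonic mean of the registers. Memory is fixed at m small counters regardless of input size, which is the O(1) memory property mentioned above. The function name `hll_estimate` and the parameter `b` are our own choices for this example.

```python
import hashlib
import math

def hll_estimate(items, b=10):
    """Illustrative HyperLogLog cardinality estimate.

    b: number of index bits; m = 2**b registers are used, so memory
    stays constant no matter how many items are processed.
    """
    m = 1 << b
    registers = [0] * m
    for item in items:
        # Derive a 64-bit hash of the item (SHA-1 truncated, for illustration).
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
        idx = h >> (64 - b)                # first b bits select a register
        rest = h & ((1 << (64 - b)) - 1)   # remaining 64 - b bits
        # Rank = position of the leftmost 1-bit in the remaining bits.
        rank = (64 - b) - rest.bit_length() + 1
        registers[idx] = max(registers[idx], rank)
    # Raw estimate: alpha_m * m^2 / sum(2^-register), per the HLL paper.
    alpha = 0.7213 / (1 + 1.079 / m)
    est = alpha * m * m / sum(2.0 ** -r for r in registers)
    # Small-range correction: fall back to linear counting.
    zeros = registers.count(0)
    if est <= 2.5 * m and zeros:
        est = m * math.log(m / zeros)
    return est
```

With b = 10 (1024 registers) the theoretical standard error is about 1.04/sqrt(1024), roughly 3%, so an input of 100,000 distinct items typically yields an estimate within a few percent of the true count. The register array also suggests why the algorithm parallelizes well: registers built independently over partitions of the data can be merged by a simple element-wise maximum.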