Abstract

Big data is a "relative" concept: it arises from the combination of data, application, and platform properties. The term big data has been applied to almost every problem involving large, real-time, or heterogeneous data. However, these data attributes alone are not sufficient to identify big data; the application and platform properties must also be considered to determine processing thresholds. Misidentification of big data can lead to inefficient use of optimization techniques, resulting in global inefficiency, reduced system performance, increased power consumption, greater effort on the part of the programming team, and misallocation of the hardware resources required for the task. To address this, a structured approach is presented for the identification of big data. The approach is based on three equations that categorize the Volume, Velocity, and Variety characteristics by relating data, application, and platform properties. Identifying the 3Vs is necessary for enabling the relevant optimization techniques. Beyond identifying the 3Vs, it is also necessary to determine whether the big data is due to one, two, or all three Vs, since the involvement of more Vs increases problem complexity. Accordingly, a classification of big data into strong, moderate, and weak levels is proposed. To evaluate the proposed methods, a set of well-known applications was tested and categorized, showing savings of up to 58% in main memory and 44% in disk reads, as well as prescriptions for lower clock rates, fewer cores, sequential programming, and non-adaptive processing and storage formats. Moreover, four case studies reported as big data were analyzed with the proposed system. The proposed method categorizes two case studies as weak low big data presenting only volume and the third as weak medium due to velocity, whereas the fourth involves no V at all. The proposed equations also reduce computation and human resources by up to 75% relative to Spark cluster execution. In this manner, the proposed work can prevent unnecessary investments through relevant prescriptions. Furthermore, the proposed equations can be integrated into different tools to assist in selectively offloading big data workloads to appropriate software and hardware solutions.
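The paper's actual equations are not reproduced in the abstract; the following Python sketch only illustrates the general idea of relating data, application, and platform properties to identify the 3Vs and derive a strong/moderate/weak level. All field names, thresholds, and conditions below are assumptions for illustration, not the paper's formulas.

```python
from dataclasses import dataclass

@dataclass
class Properties:
    # Data properties (assumed fields and units)
    data_size_gb: float          # total data volume
    arrival_rate_mbps: float     # rate at which data arrives
    format_count: int            # number of distinct data formats
    # Platform properties
    memory_gb: float             # available main memory
    ingest_capacity_mbps: float  # sustainable ingestion rate
    # Application properties
    deadline_s: float            # required response time
    est_runtime_s: float         # estimated runtime on the platform

def identify_vs(p: Properties) -> set[str]:
    """Return the Vs that hold by relating data, application,
    and platform properties (hypothetical threshold checks)."""
    vs = set()
    if p.data_size_gb > p.memory_gb:
        vs.add("Volume")    # data exceeds the platform's memory
    if (p.arrival_rate_mbps > p.ingest_capacity_mbps
            or p.est_runtime_s > p.deadline_s):
        vs.add("Velocity")  # platform cannot keep pace in time
    if p.format_count > 1:
        vs.add("Variety")   # heterogeneous formats are present
    return vs

def classify(vs: set[str]) -> str:
    """Map the number of involved Vs to the proposed levels."""
    return {3: "strong", 2: "moderate", 1: "weak"}.get(len(vs), "not big data")
```

For instance, a workload whose data fits in main memory, arrives below the platform's ingestion capacity, and uses a single format would yield an empty set of Vs and be classified as not big data, so heavyweight big data optimizations and hardware could be skipped, in the spirit of the prescriptions above.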
