Abstract

Data is among a company's most valuable assets. When its quality degrades, the consequences are unpredictable and can lead to completely wrong insights. In a Big Data context, evaluating data quality is challenging and must be done prior to any Big Data analytics in order to provide a measure of data quality confidence. Given the huge data size and the speed at which it is generated, mechanisms and strategies are required to evaluate and assess data quality quickly and efficiently. However, checking the quality of Big Data is very costly if the check is applied to the entire data set. In this paper, we propose an efficient data quality evaluation scheme that applies sampling strategies to Big Data sets. Sampling reduces the data to representative samples of the population, enabling fast quality evaluation. The evaluation targets data quality dimensions such as completeness and consistency. Experiments were conducted on a sleep disorder data set using Big Data bootstrap sampling techniques. The results show that the mean quality score of the samples is representative of the original data and illustrate the importance of sampling for reducing computing costs in Big Data quality evaluation. Finally, we applied the generated quality results as quality proposals to the original data to improve its quality.
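To make the core idea concrete, the following is a minimal Python sketch (not the authors' implementation) of bootstrap-based quality estimation: draw samples with replacement from a large data set, score each sample on a quality dimension such as completeness, and take the mean sample score as an estimate of the full data set's quality. The use of pandas, the completeness() helper, and all parameter values are illustrative assumptions; at true Big Data scale the same logic would run on a distributed engine.

import numpy as np
import pandas as pd

def completeness(df: pd.DataFrame) -> float:
    # Completeness dimension: fraction of cells that are non-missing.
    return 1.0 - df.isna().to_numpy().mean()

def bootstrap_quality(df: pd.DataFrame, n_samples: int = 30,
                      sample_frac: float = 0.01, seed: int = 42) -> float:
    # Mean completeness score over bootstrap samples drawn with replacement.
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_samples):
        sample = df.sample(frac=sample_frac, replace=True,
                           random_state=int(rng.integers(0, 2**32 - 1)))
        scores.append(completeness(sample))
    return float(np.mean(scores))

# Usage: the sampled estimate should track the full-data score while
# touching only a small fraction of the rows per sample.
df = pd.DataFrame(np.random.rand(100_000, 5))
df[df < 0.05] = np.nan  # inject roughly 5% missing values
print("full-data completeness:", completeness(df))
print("bootstrap estimate:    ", bootstrap_quality(df))

Because each bootstrap sample covers only a small fraction of the rows, the per-sample scoring cost is a fraction of a full scan, which is the cost saving the paper targets; averaging over many samples keeps the estimate close to the true quality score.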
