Abstract

Data curation is the process of acquiring data from multiple sources, assessing and improving its quality, standardizing and integrating it into a usable information product, and eventually disposing of it. This research describes the building of a proof-of-concept for an unsupervised data curation process addressing a basic form of data cleansing: identifying redundant records through entity resolution (ER) and spelling correction. The novelty of the approach is to perform ER as the first step, using an unsupervised blocking and stop-word scheme based on token frequency, a scoring matrix for linking unstandardized references, and an unsupervised process for evaluating linking results based on cluster entropy. The ER process is iterative, and the match threshold is increased in each iteration. The prototype was tested on 18 fully annotated test samples of primarily synthetic person data, varied in two ways: good versus poor data quality, and a single record layout versus two different record layouts. In samples with good data quality, using both single and mixed layouts, the final clusters had an average F-measure of 0.91, precision of 0.96, and recall of 0.87, outcomes comparable to results from a supervised ER process. In samples with poor data quality, whether single or mixed layout, the average F-measure was 0.78, precision 0.74, and recall 0.83, showing that data quality assessment and improvement is still a critical component of successful data curation. The results demonstrate the feasibility of building an unsupervised ER engine that supports data integration for good-quality references while avoiding the time and effort needed to standardize reference sources to a common layout, design and test matching rules, design blocking keys, or test blocking alignment.
The paper also proposes how unsupervised data quality improvement processes could be incorporated into the design, allowing the model to address an even broader range of data curation applications.
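As a rough illustration of the token-frequency-based stop-word and blocking scheme described in the abstract, the sketch below drops the most frequent tokens as stop words and then blocks references that share any surviving token. All names and the `stop_fraction` parameter are illustrative assumptions, not details from the paper's implementation.

```python
from collections import Counter

def build_blocks(references, stop_fraction=0.05):
    """Block unstandardized references on shared tokens, after first
    removing the most frequent tokens as stop words (an unsupervised,
    frequency-based scheme). `stop_fraction` is the share of distinct
    tokens treated as stop words -- an assumed, illustrative value."""
    token_sets = [set(ref.upper().split()) for ref in references]
    freq = Counter(tok for toks in token_sets for tok in toks)
    n_stop = int(len(freq) * stop_fraction)
    stop_words = {tok for tok, _ in freq.most_common(n_stop)}
    blocks = {}
    for idx, toks in enumerate(token_sets):
        for tok in toks - stop_words:
            blocks.setdefault(tok, []).append(idx)
    return blocks

refs = ["JOHN SMITH OAK ST", "JON SMITH OAK ST", "MARY JONES ELM AVE"]
blocks = build_blocks(refs)
# Records 0 and 1 share tokens, so they land in a common block;
# record 2 shares no tokens with record 0 and is never compared to it.
```

In the process the abstract describes, candidate pairs produced by such blocks would then be scored via the scoring matrix and clustered iteratively, with the match threshold raised on each pass.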

Highlights

  • As organizations ingest and process larger amounts of data, the time and effort required to prepare and integrate data into useful products are increasing, and many researchers are working to alleviate this bottleneck using several different approaches [1], [2], [3].

  • The problem is only exacerbated by Big Data [7], [8].

  • Many organizations are beginning to recognize this time and effort gap between data ingestion and the final information product, and are moving to remedy the situation by increasing the level of automation in data curation processes [9]. These organizations, along with software vendors and university researchers, are trying to understand how to apply the same AI and machine learning (ML) techniques used for downstream data analytics to the automation of the preceding data preparation processes [10], [11].


Summary

INTRODUCTION

As organizations ingest and process larger amounts of data, the time and effort required to prepare and integrate data into useful products are increasing, and many researchers are working to alleviate this bottleneck using several different approaches [1], [2], [3]. Many organizations are beginning to recognize this time and effort gap between data ingestion and the final information product, and are moving to remedy the situation by increasing the level of automation in data curation processes [9]. These organizations, along with software vendors and university researchers, are trying to understand how to apply the same AI and ML techniques used for downstream data analytics to the automation of the preceding data preparation processes [10], [11].

Phase I – Tokenization
Phase II – Global Token Replacement
Phase III – Removal of Stop Words, Blocking, and Clustering of Equivalent References (Entity Resolution)
Cluster Cleaning
POC TEST SAMPLES AND RESULTS
Samples with Good Data Quality
Low Data Quality Samples
Example Results using Machine Learning for P5
CONCLUSION AND FUTURE RESEARCH
Industry Testing
Predicting Parameters and Scalability
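The results sections above report precision, recall, and F-measure against fully annotated truth. A minimal sketch of a pairwise cluster evaluation of that kind (function and variable names are my own, and this is one common way such scores are computed, not necessarily the paper's exact procedure) could look like:

```python
from itertools import combinations

def pairwise_scores(predicted, truth):
    """Pairwise precision, recall, and F-measure for two clusterings,
    each given as a list of cluster labels aligned by record index."""
    def equivalent_pairs(labels):
        # All unordered record pairs placed in the same cluster.
        return {(i, j) for i, j in combinations(range(len(labels)), 2)
                if labels[i] == labels[j]}
    pred_pairs = equivalent_pairs(predicted)
    true_pairs = equivalent_pairs(truth)
    hits = len(pred_pairs & true_pairs)
    precision = hits / len(pred_pairs) if pred_pairs else 1.0
    recall = hits / len(true_pairs) if true_pairs else 1.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

For instance, the abstract's good-quality averages are consistent under this harmonic-mean definition: precision 0.96 and recall 0.87 give F = 2(0.96)(0.87)/(0.96 + 0.87) ≈ 0.91.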