Abstract

Data quality improvement is an important aspect of enterprise data management. Data characteristics can vary across customers, domains, and geographies, making data quality improvement a challenging task. It is often an iterative process that mainly involves writing data quality rules for standardizing records and eliminating duplicates present within the data. Existing data cleansing tools require a fair amount of customization whenever they are moved from one customer or domain to another. In this paper, we present a data quality improvement tool that assists the data quality practitioner by surfacing the characteristics of the entities present in the data. The tool identifies the variants and synonyms of a given entity in the data, an important step in writing rules for standardizing the data. We present a ripple-down-rules framework for maintaining data quality rules that reduces the services effort required to add new rules. We also describe a typical data quality improvement workflow, show the usefulness of the tool at each step, and report experimental results along with a discussion of how the tool reduces services effort in data quality improvement engagements.
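
The abstract names ripple down rules (RDR) as the mechanism for maintaining standardization rules; the paper's own rule format is not shown here, so the following is a minimal sketch of the general RDR technique, in which a new rule is attached as an exception or alternative at the exact node where a misclassification occurred, leaving all previously validated behavior intact. The `Rule` structure, the street-suffix rules, and the `classify` function are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a ripple-down-rules (RDR) knowledge base for token
# standardization. All rule conditions and conclusions below are
# illustrative examples, not rules from the paper.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Rule:
    """One node in the RDR tree: a condition, a conclusion, two branches."""
    condition: Callable[[str], bool]
    conclusion: str
    # Followed when the condition fired but a later case showed the
    # conclusion needed refinement (an exception rule).
    except_branch: Optional["Rule"] = None
    # Followed when the condition did not fire (an alternative rule).
    else_branch: Optional["Rule"] = None


def classify(rule: Optional[Rule], token: str, default: str = "UNKNOWN") -> str:
    """Walk the tree; the conclusion of the last firing rule wins."""
    result = default
    while rule is not None:
        if rule.condition(token):
            result = rule.conclusion   # tentatively accept this conclusion
            rule = rule.except_branch  # check for a refining exception
        else:
            rule = rule.else_branch    # try the alternative rule
    return result


# Hypothetical knowledge base for standardizing street-suffix variants.
# A new case the tree misclassifies would be handled by appending a rule
# at the node where evaluation ended, never by editing existing rules.
kb = Rule(
    condition=lambda t: t.lower() in {"st", "st.", "str"},
    conclusion="STREET",
    else_branch=Rule(
        condition=lambda t: t.lower() in {"ave", "ave.", "av"},
        conclusion="AVENUE",
    ),
)

print(classify(kb, "St."))   # STREET
print(classify(kb, "Ave"))   # AVENUE
print(classify(kb, "Blvd"))  # UNKNOWN -> a new else-branch rule goes here
```

Because every new rule is added at a leaf reached by the failing case, existing cases keep evaluating exactly as before, which is why RDR-style maintenance can reduce the services effort the abstract refers to.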
