Abstract

Cloud computing revolutionizes data management by offering centralized repositories or services accessible over the Internet. These services, hosted by a single provider or distributed across multiple entities, facilitate seamless access for users and applications. Cloud technology also enables federated search, allowing organizations to combine data from diverse sources and perform comprehensive searches. However, such integration often leads to problems with data quality and duplication because of structural disparities among datasets, including variations in metadata. This research presents a novel provenance-based search model designed to improve data quality in cloud environments. The model extends the traditional concept of a single canonical URL by incorporating provenance data, giving users a wider range of search options. Building on this model, the study conducts inferential analyses to improve data accuracy and identify duplicate entries effectively. To verify the proposed model, two research paper datasets from the Kaggle and DBLP repositories are used; the model identifies duplicates effectively, even with partial queries. Tests demonstrate the system's ability to remove duplicates based on title or author, in both single-dataset and distributed-dataset scenarios. Traditional search engines struggle with duplicate content, which leads to biased results or inefficient crawling; this research instead uses provenance data to improve search capabilities and overcome these limitations.
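
The following is a minimal sketch, not the authors' implementation, of the kind of provenance-aware duplicate detection the abstract describes: each record carries provenance metadata (originating repository and canonical URL) alongside its title and authors, and records are flagged as duplicates when normalized titles match, with lookup by full or partial title/author queries. All field and function names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    authors: list
    source: str          # provenance: originating repository, e.g. "Kaggle" or "DBLP"
    canonical_url: str   # provenance: canonical location of this copy of the record

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting differences don't hide duplicates."""
    return " ".join(text.lower().split())

def find_duplicates(records, query: str):
    """Return groups of records whose title or any author matches the (possibly partial) query."""
    q = normalize(query)
    matches = [r for r in records
               if q in normalize(r.title)
               or any(q in normalize(a) for a in r.authors)]
    # Group matches by normalized title; a group with more than one record is a duplicate set.
    groups = {}
    for r in matches:
        groups.setdefault(normalize(r.title), []).append(r)
    return {title: rs for title, rs in groups.items() if len(rs) > 1}

# Example: the same paper indexed by two repositories is reported as one duplicate group,
# and the provenance fields show where each copy came from.
records = [
    Record("Provenance-Based Search in the Cloud", ["A. Author"], "Kaggle", "https://example.org/p1"),
    Record("Provenance-based search in the cloud", ["A. Author"], "DBLP", "https://example.org/p2"),
]
print(find_duplicates(records, "provenance-based"))
```
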
