Abstract

The widespread use of data makes privacy protection an urgent problem. Anonymization is a traditional technique for protecting private information. In multi-source data scenarios, however, an attacker with background knowledge of the data from one source may obtain accurate quasi-identifier (QI) values for the other sources. When k-anonymity is applied to the aggregated dataset, it generalizes all or only part of the QI values, so some values remain unchanged; these unchanged values create new privacy disclosures that allow an attacker to infer further information about an individual. Existing techniques cannot address this problem. This study explores the additional privacy disclosures of aggregated datasets. We propose a new attack, which we call the multi-source linkability attack. We then design multi-source (k,d)-anonymity and multi-source (k,l,d)-diversity models and algorithms to protect quasi-identifiers and sensitive attributes, respectively. We evaluate our algorithms experimentally on two real datasets, Adult and Census. Compared with the existing Incognito, Flash, Top-down, and Mondrian algorithms, our approach better prevents privacy disclosures in multi-source scenarios. The experimental results also show that our algorithms perform well in terms of information loss and efficiency.
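As a minimal illustration of the generalization step that k-anonymity performs (the multi-source (k,d)- and (k,l,d)-models are defined in the paper body; the sketch below is a standard textbook-style example in Python with hypothetical column names, not the authors' algorithm):

# Minimal k-anonymity sketch (illustrative only; not the paper's
# multi-source (k,d)-anonymity algorithm). Quasi-identifier (QI)
# values are generalized until every QI combination occurs >= k times.
from collections import Counter

def is_k_anonymous(rows, qi_cols, k):
    """Return True if every combination of QI values appears at least k times."""
    counts = Counter(tuple(row[c] for c in qi_cols) for row in rows)
    return all(n >= k for n in counts.values())

def generalize_age(row, width=10):
    """Replace an exact age with a coarser interval, e.g. 37 -> '30-39'."""
    low = (row["age"] // width) * width
    return {**row, "age": f"{low}-{low + width - 1}"}

records = [
    {"age": 34, "zip": "47677", "disease": "flu"},
    {"age": 37, "zip": "47678", "disease": "cancer"},
    {"age": 36, "zip": "47677", "disease": "flu"},
]
# Exact ages rarely repeat, so the raw table is not 2-anonymous on age.
print(is_k_anonymous(records, ["age"], k=2))   # False
# After generalizing age to decades, all three rows fall into '30-39'.
print(is_k_anonymous([generalize_age(r) for r in records], ["age"], k=2))  # True

Note that when only some QI attributes are generalized (here, age but not zip), the untouched values can still be linked against another source with background knowledge, which is the multi-source disclosure the paper targets.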
