Abstract

Private information leakage in Online Social Networks (OSNs) has raised growing concerns. Studying the problem from both the attacker's and the defender's perspectives provides valuable insights to researchers. Several novel anonymization mechanisms have been proposed, yet the development of de-anonymization mechanisms struggles with the complexity of graph structures and the difficulty of finding a globally optimal mapping strategy. Whereas existing de-anonymization mechanisms collect information by mapping users from the adversary's background knowledge to the published data, this paper applies an inference attack based on both the background knowledge and the published information. The high-level idea is to learn graph properties from the published information and use them to complete the missing information in the target area. In particular, a Generative Adversarial Network (GAN) ensures that the generated graph is similar to the published data, and the adversary's background knowledge is embedded into the GAN as conditional information. Because this knowledge is embedded into the generated data through deep learning models, the proposed attack is hard to defend against. Evaluation on real-world OSN datasets shows that the proposed scheme de-anonymizes edge information with high accuracy; moreover, it also enhances the performance of existing user de-anonymization schemes.
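The conditioning idea described above can be sketched in miniature. The following is not the paper's implementation; it is a minimal, hypothetical NumPy illustration of a conditional GAN's forward structure, where the generator receives the adversary's partial adjacency matrix (the background knowledge) as its condition and emits edge probabilities for the target area, and the discriminator scores how closely a graph resembles the published data. All sizes, weights, and function names are assumptions for illustration; real models would use learned neural networks and alternating training.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8             # hypothetical number of users in the target area
NOISE_DIM = 4     # latent noise dimension (assumed)
COND_DIM = N * N  # condition: adversary's partial adjacency, flattened

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical linear generator: maps [noise ; condition] -> edge probabilities.
W_g = rng.normal(scale=0.1, size=(NOISE_DIM + COND_DIM, N * N))

def generate(noise, condition):
    """Produce an N x N matrix of edge probabilities, conditioned on
    the adversary's background-knowledge adjacency."""
    x = np.concatenate([noise, condition.ravel()])
    probs = sigmoid(x @ W_g).reshape(N, N)
    # Symmetrize and zero the diagonal: an undirected simple graph.
    probs = (probs + probs.T) / 2.0
    np.fill_diagonal(probs, 0.0)
    return probs

# Hypothetical discriminator: scores how "published-like" a graph is,
# also conditioned on the same background knowledge.
w_d = rng.normal(scale=0.1, size=N * N + COND_DIM)

def discriminate(adj, condition):
    x = np.concatenate([adj.ravel(), condition.ravel()])
    return sigmoid(x @ w_d)  # probability that the graph is "real"

# Adversary's background knowledge: a few known edges in the target area.
background = np.zeros((N, N))
background[0, 1] = background[1, 0] = 1.0

noise = rng.normal(size=NOISE_DIM)
fake_adj = generate(noise, background)
score = discriminate(fake_adj, background)
# Training (omitted here) would alternate updates: push `score` down for
# generated graphs and up for subgraphs of the published data, so that the
# generated edges in the target area come to mimic the published structure.
```

The key design point the sketch captures is that the background knowledge enters both networks as conditional input, so the generator's output is steered toward graphs consistent with what the adversary already knows.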
