Abstract

Social networks collect enormous amounts of personal and behavioral user data, which could threaten users' privacy if published or shared directly. Privacy-preserving graph publishing (PPGP) makes user data available while protecting private information. For this purpose, anonymization methods such as perturbation and generalization are commonly used in PPGP. However, traditional anonymization methods struggle to balance high-level privacy and utility, are ineffective at defending against both various link and hybrid inference attacks, and are vulnerable to graph neural network (GNN)-based attacks. To solve these problems, we present a novel privacy-disentangled approach that separates private from non-private information for a better privacy-utility trade-off. Moreover, we propose a unified graph deep learning framework for PPGP, denoted privacy-disentangled variational information bottleneck (PDVIB). Using low-dimensional perturbations, the model generates an anonymized graph that defends against various inference attacks, including GNN-based attacks. In particular, the model fits various privacy settings by employing adjustable perturbations at the node level. On three real-world datasets, PDVIB is demonstrated to generate robust anonymized graphs that defend against various privacy inference attacks while maintaining the utility of non-private information.
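The core mechanism the abstract describes, perturbing low-dimensional node embeddings with node-level adjustable noise before decoding an anonymized graph, can be sketched with a variational reparameterization step. This is a minimal illustration, not the authors' implementation: the random linear maps stand in for learned encoder layers, and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes with 8-dimensional features (illustrative sizes).
num_nodes, feat_dim, latent_dim = 5, 8, 2
X = rng.normal(size=(num_nodes, feat_dim))

# A variational "encoder" maps each node to a low-dimensional Gaussian;
# random linear maps here stand in for trained GNN encoder layers.
W_mu = rng.normal(size=(feat_dim, latent_dim))
W_logvar = rng.normal(size=(feat_dim, latent_dim)) * 0.1

mu = X @ W_mu          # per-node mean of the latent code
logvar = X @ W_logvar  # per-node log-variance: node-level, adjustable noise scale

# Reparameterization trick: perturb each node's embedding with Gaussian noise
# whose magnitude can differ per node to match that node's privacy setting.
eps = rng.normal(size=mu.shape)
Z = mu + np.exp(0.5 * logvar) * eps

# An anonymized adjacency matrix can then be decoded from the perturbed codes,
# e.g. by thresholding inner-product similarity (a common graph decoder choice).
A_anon = (Z @ Z.T > 0.0).astype(int)
np.fill_diagonal(A_anon, 0)
print(Z.shape, A_anon.shape)
```

In a trained model, an information-bottleneck objective would push `Z` to discard private information while the decoder preserves utility; here the sketch only shows the noise-injection and decoding plumbing.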
