Abstract
In graph embedding protection, deleting a node's embedding vector does not fully disrupt its structural relationships. Instead, the embedding model must be retrained on the network with sensitive nodes removed, which wastes computation and offers no protection for ordinary users; meanwhile, edge perturbation does not guarantee good utility. This work proposes a new privacy-utility trade-off method that requires no retraining. Firstly, since embedding distance reflects the closeness of nodes, we label and group user nodes into sensitive, near-sensitive, and ordinary regions and apply a different strength of privacy protection to each. The near-sensitive region reduces the leakage risk of neighbors connected to sensitive nodes without sacrificing all of their utility. Secondly, we use mutual information to measure both privacy and utility, adapting a single-model mutual information neural estimator to vector pairs to reduce modeling and computational complexity. Thirdly, by iteratively adding different noise to the divided regions and re-estimating the mutual information between the original and noise-perturbed embeddings, our framework achieves a good trade-off between privacy and utility. Simulation results show that the proposed framework outperforms state-of-the-art baselines such as LPPGE and DPNE.
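To make the mutual information step concrete, the following is a minimal sketch, not the authors' exact estimator: a MINE-style (Donsker-Varadhan) lower bound on the mutual information between original embedding vectors and their noise-perturbed counterparts, applied directly to vector pairs. The network size, noise scale, and training schedule are illustrative assumptions.

```python
# Minimal sketch: MINE-style mutual information lower bound between original
# embeddings X and noise-perturbed embeddings Z. Architecture and
# hyperparameters are illustrative assumptions, not the paper's configuration.
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """Scores joint pairs (x, z) against marginal (shuffled) pairs."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1))

def mine_lower_bound(T, x, z):
    """Donsker-Varadhan bound: E_joint[T] - log E_marginal[exp(T)]."""
    joint = T(x, z).mean()
    z_shuffled = z[torch.randperm(z.size(0))]            # break the pairing
    marginal = torch.logsumexp(T(x, z_shuffled).squeeze(-1), dim=0) - math.log(z.size(0))
    return joint - marginal

# Toy usage: estimate MI between embeddings and their noisy versions.
dim, n = 64, 1024
x = torch.randn(n, dim)                                  # original embeddings
z = x + 0.5 * torch.randn(n, dim)                        # noise-perturbed embeddings
T = StatisticsNetwork(dim)
opt = torch.optim.Adam(T.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    loss = -mine_lower_bound(T, x, z)                    # maximize the bound
    loss.backward()
    opt.step()
print(f"estimated MI lower bound: {-loss.item():.3f}")
```

In the framework described above, such an estimate would be recomputed after each round of region-specific noise injection to check whether the privacy-utility balance has been reached.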