Abstract

Graph Neural Networks (GNNs) are widely used for learning tasks over graph-structured data, which represents many real-world systems. The collection and analysis of graph data with GNNs raise significant privacy concerns about the disclosure of sensitive information. Existing work on privacy-preserving GNNs protects the privacy of node features and labels; however, the graph structure also needs to be privatized. To address this problem, we propose Local Structural Perturbation Graph Neural Network, a method that adds noise to a node's neighborhood data along with its features and label. We perturb the graph structure by sampling non-neighboring nodes and randomizing them together with the true neighborhood, using differentially private mechanisms with theoretical guarantees. This introduces the challenge of limiting the impact of the neighborhood noise on accuracy; to compensate for the loss of true neighbors during randomization, we use the p-hop neighborhood. For node-feature and label privacy, we adopt the mechanisms used in previous privacy-preserving GNN methods. Extensive experiments on real-world datasets show the impact of the structural perturbation, and we provide a theoretical analysis of the proposed method.
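The structural perturbation described above can be illustrated with a minimal sketch of standard randomized response applied to one node's adjacency vector; this is an assumption about the flavor of mechanism used (the paper's exact mechanism, sampling scheme, and parameters are not specified here), and all names are illustrative.

```python
import math
import random

def randomize_neighborhood(adj_row, epsilon):
    """Illustrative epsilon-LDP randomized response over a 0/1 adjacency row.

    Each bit (neighbor / non-neighbor) is kept with probability
    e^eps / (1 + e^eps) and flipped otherwise, so both true neighbors and
    sampled non-neighbors are randomized, as in edge-level local DP.
    """
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return [bit if random.random() < p_keep else 1 - bit for bit in adj_row]

# Example: perturb a node's links to 5 other nodes.
noisy = randomize_neighborhood([1, 0, 1, 0, 0], epsilon=1.0)
```

A larger epsilon keeps more of the true structure (weaker privacy); a smaller epsilon flips more bits, which is the accuracy loss the p-hop compensation targets.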
