Abstract

Graph Neural Networks (GNNs) have emerged as a powerful technique, showing excellent performance in a variety of fields such as social networks and recommendation systems. However, GNNs raise privacy concerns, as large amounts of information about their training datasets may be compromised. In this paper, we develop a privacy-preserving GNN that uses the Functional Mechanism (FM) to train the learning model. This mechanism perturbs the polynomial approximation of the objective function so that the trained GNN model satisfies Differential Privacy (DP). We show that our method maximizes the accuracy of the private model, achieving prediction power comparable to the unperturbed results while satisfying the privacy guarantees.
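To make the objective-perturbation idea concrete, the sketch below applies the Functional Mechanism to ordinary least-squares regression rather than to the paper's GNN objective: the squared loss is expanded into polynomial coefficients, each coefficient is perturbed with Laplace noise calibrated to a sensitivity bound, and the noisy objective is minimized in closed form. The function name, the normalization assumptions (||x||_1 <= 1, |y| <= 1), and the conservative sensitivity constant are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def functional_mechanism_linreg(X, y, epsilon, rng=None):
    """Differentially private linear regression via the Functional Mechanism.

    A minimal sketch, assuming each row of X satisfies ||x||_1 <= 1 and
    each label satisfies |y| <= 1. This illustrates coefficient
    perturbation in general; it is not the paper's GNN training procedure.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Polynomial coefficients of the loss sum_i (y_i - x_i^T w)^2:
    #   degree 0: sum_i y_i^2          (constant, irrelevant for the argmin)
    #   degree 1: -2 * sum_i y_i x_i
    #   degree 2: sum_i x_i x_i^T
    lam1 = -2.0 * X.T @ y
    lam2 = X.T @ X

    # Under the normalizations above, one record contributes at most
    # y^2 + 2|y|*||x||_1 + ||x||_1^2 <= 4 to the L1 norm of the
    # coefficients, so replacing one record changes them by at most
    # 2 * 4 = 8 (a conservative global-sensitivity bound, assumed here).
    sensitivity = 8.0
    scale = sensitivity / epsilon

    # Perturb every polynomial coefficient with Laplace noise.
    lam1_noisy = lam1 + rng.laplace(0.0, scale, size=lam1.shape)
    noise = rng.laplace(0.0, scale, size=lam2.shape)
    lam2_noisy = lam2 + (noise + noise.T) / 2.0  # keep quadratic term symmetric

    # Minimize the perturbed objective w^T lam2 w + lam1^T w; a small
    # ridge term keeps the noisy matrix well-conditioned.
    w = np.linalg.solve(lam2_noisy + 1e-6 * np.eye(d), -0.5 * lam1_noisy)
    return w
```

Because the noise is injected once into the objective's coefficients rather than at every gradient step, a single minimization of the perturbed objective yields a model that satisfies epsilon-DP, which is the same design choice the paper exploits for GNN training.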
