Abstract

Graph neural networks (GNNs) have shown superior performance in learning node representations for various graph inference tasks and play a pivotal role in high-stakes decision scenarios. However, GNNs may magnify the bias in the original data and make discriminatory decisions toward individuals, reducing user trust. Research advances have been made to improve the individual fairness of GNNs and increase trust, but they raise privacy concerns when the acquired data involves sensitive information. To address this challenge, we propose a GNN privacy-protection method for node features based on local differential privacy (LDP), called locally private feature individual-fairness graph neural network (LPF-IFGNN), which strikes a balance between fairness and privacy. Specifically, we propose an LDP mechanism that accounts for the privacy issues arising in normalization and can compress and perturb node features. In addition, we employ a convolution layer that aggregates multi-hop node features with the mean function to denoise the perturbed features and promote individual fairness. We further consider the setting in which node labels must also be protected and propose LPL-LPF-IFGNN based on LDP. Experimental results on five real-world datasets show that LPL-LPF-IFGNN outperforms the state-of-the-art fairness baseline by 41.75% in accuracy (ACC) and 5.02% in NDCG@10 on private data with a feature privacy budget of 1 and a label privacy budget of 0.5. Experiments further indicate that our methods achieve a good balance between model utility and individual fairness for private node data under LDP guarantees.
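
To make the two ingredients named above concrete, the following is a minimal, hypothetical sketch (not the authors' released code or exact mechanism): it perturbs bounded node features with a standard one-bit LDP estimator under a per-feature budget eps, and then averages the noisy features over k-hop neighborhoods with a row-normalized adjacency, which is one simple way mean aggregation can shrink LDP noise before the features enter a GNN. The function names, the one-bit mechanism, and the toy graph are illustrative assumptions only; the paper's fairness component is not reproduced here.

```python
import numpy as np

def ldp_perturb_features(x, eps):
    """One-bit LDP perturbation of features assumed pre-scaled to [0, 1].

    The scaling bounds are assumed public, so normalization itself leaks nothing here;
    each entry is released as a single biased coin flip, then debiased on the server side.
    """
    p = (x * (np.e**eps - 1) + 1) / (np.e**eps + 1)      # P(report 1), satisfies eps-LDP per feature
    bits = (np.random.rand(*x.shape) < p).astype(float)  # one random bit per entry
    # Unbiased estimate of the original value recovered from the reported bit.
    return (bits * (np.e**eps + 1) - 1) / (np.e**eps - 1)

def khop_mean_aggregate(adj, x, k=2):
    """Average features over k-hop neighborhoods using a row-normalized adjacency with self-loops."""
    a = adj + np.eye(adj.shape[0])
    a = a / a.sum(axis=1, keepdims=True)
    for _ in range(k):
        x = a @ x  # each hop averages neighbors' noisy features, reducing the injected LDP noise
    return x

# Toy usage: 4 nodes on a path graph, 3-dimensional features, feature budget eps = 1.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.rand(4, 3)
x_private = ldp_perturb_features(x, eps=1.0)
x_denoised = khop_mean_aggregate(adj, x_private, k=2)
```

In this sketch the per-entry estimator is unbiased but high-variance; the k-hop mean is what trades a little smoothing for a large variance reduction, which is the role the abstract assigns to the multi-hop aggregation layer.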
