Abstract

Graph neural networks have shown excellent performance in learning graph representations. In many cases, the graph-structured data are crowd-sourced and may contain sensitive information, raising privacy concerns. Privacy-preserving graph neural networks have therefore attracted increasing interest. A promising approach is to apply local differential privacy (LDP). Although LDP provides protection against privacy attacks, the calibration of the privacy budget is not well understood, and the relationship between the privacy protection level and model utility is not well established. In this paper, we propose an evaluation method to characterize the trade-off between utility and privacy for locally private graph neural networks (LPGNNs). More specifically, we use the effectiveness of attribute inference attacks as a privacy measurement to bridge the gap between model utility, privacy leakage, and the value of the privacy budget. Our experimental results show that LPGNNs can deliver the promised protection against powerful adversaries only at the cost of poor model utility, and when the utility is acceptable, they remain moderately vulnerable to attribute inference attacks. Moreover, a direct application of our method is visualizing the tuning of privacy budgets, which facilitates the deployment of LDP.
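
To make the role of the privacy budget concrete, the sketch below perturbs binary node attributes with randomized response, a standard ε-LDP primitive. The mechanism, the binary attribute encoding, and the naive "attacker" who simply reads the reported bits are illustrative assumptions for this sketch, not the paper's exact perturbation or attack model.

```python
# Minimal sketch, assuming binary node attributes and the standard
# randomized-response mechanism: each bit is kept with probability
# e^eps / (e^eps + 1), which satisfies eps-local differential privacy
# per attribute. This is not necessarily the mechanism used in the paper.
import numpy as np

def randomized_response(x, eps, rng=None):
    """Perturb a 0/1 attribute matrix x under eps-LDP."""
    rng = np.random.default_rng() if rng is None else rng
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
    keep = rng.random(x.shape) < p_keep
    return np.where(keep, x, 1 - x)

# Toy example: five nodes with eight binary attributes each.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(5, 8))

for eps in (0.5, 2.0, 8.0):
    x_priv = randomized_response(x, eps, rng)
    # Fraction of true attributes recovered by a naive attacker who reads
    # the perturbed bits directly -- a crude stand-in for the
    # attribute-inference measurement discussed in the abstract.
    recovery = (x_priv == x).mean()
    print(f"eps = {eps:>3}: attacker recovery rate = {recovery:.2f}")
```

Smaller budgets flip more bits, degrading both the features the GNN sees and the attacker's recovery rate; larger budgets preserve utility but leave the attributes easier to infer. This is the utility-privacy trade-off that the proposed evaluation method characterizes.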
