Abstract

Existing privacy-preserving Graph Neural Networks (GNNs) cannot provide security and privacy guarantees against malicious adversaries without sacrificing accuracy and efficiency. For example, Secure Multi-party Computation (MPC) can resist malicious adversaries but incurs severe overhead. A Trusted Execution Environment (TEE), such as Intel Software Guard Extensions (SGX), can guarantee privacy and faithful execution without compromising efficiency. However, existing attacks can compromise the confidentiality of SGX enclaves. Moreover, the CPU-based architecture of SGX limits its extensibility, as it cannot perform collaborative computation with GPUs. To address these issues, we propose a novel GNN training and inference framework that allows data holders to outsource their computation tasks to servers. First, we combine the advantages of MPC with the code-integrity protection provided by SGX to resist malicious adversaries without sacrificing efficiency. Second, we adopt a strategy that allows the servers to offload parallelizable computation tasks to untrusted yet high-performance GPUs, further improving efficiency without compromising privacy. To the best of our knowledge, our proposal is the first privacy-preserving GNN framework that resists malicious adversaries without sacrificing accuracy or efficiency. Experiments on real-world citation datasets demonstrate the performance of our framework in terms of security, privacy, accuracy, and efficiency.

