Abstract

Collaborative learning and related techniques, such as federated learning, allow multiple clients to jointly train a model while keeping their datasets local. Secure aggregation in most existing works focuses on protecting model gradients from the server. However, a dishonest user could still easily infer private information about other users. It remains challenging to design an effective solution that prevents information leakage to dishonest users. To tackle this challenge, we propose a novel and effective privacy-preserving collaborative machine learning scheme aimed at preventing information leakage to such adversaries. Specifically, we first propose a privacy-preserving network transformation method that utilizes random permutation inside Software Guard Extensions (SGX), which protects the model parameters from being inferred by a curious server and dishonest clients. Then, we apply a Partial-Random Uploading mechanism to mitigate information inference through visualizations. To further enhance efficiency, we introduce a network pruning operation and employ it to accelerate the convergence of training. We present a formal security analysis to demonstrate that our proposed scheme preserves privacy while ensuring the convergence and accuracy of secure aggregation. We conduct experiments to evaluate the accuracy and efficiency of our solution, and the experimental results show that the proposed scheme is practical.
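For illustration, the following is a minimal sketch of how a client-side partial-random upload could look; the function name partial_random_upload and the upload_fraction parameter are hypothetical placeholders for exposition and are not the paper's actual mechanism or API.

import numpy as np

def partial_random_upload(gradient, upload_fraction=0.5, rng=None):
    # Illustrative sketch: upload only a random subset of gradient entries.
    # Keeping the remaining coordinates private each round limits what a
    # dishonest participant can reconstruct from the shared update.
    rng = rng or np.random.default_rng()
    flat = gradient.ravel()
    k = max(1, int(upload_fraction * flat.size))
    chosen = rng.choice(flat.size, size=k, replace=False)

    # Unselected coordinates are zeroed out (i.e., simply not uploaded).
    masked = np.zeros_like(flat)
    masked[chosen] = flat[chosen]
    return masked.reshape(gradient.shape), chosen

# Example: a client masks half of its local gradient before uploading.
local_grad = np.random.randn(4, 4)
upload, indices = partial_random_upload(local_grad, upload_fraction=0.5)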
