Abstract

Federated learning is an approach that enables several participants to collaboratively train a machine learning model without directly exposing their local data to others. Although federated learning is designed to prevent leakage of participants' sensitive local data, recent research shows that federated learning systems still face privacy and security challenges. From a privacy perspective, curious participants or a curious central server can infer information about the training data or the model from the exchanged update parameters. From a security perspective, malicious participants can mount poisoning attacks during the model training phase. Differential privacy and secure multiparty computation (SMC) have been exploited to address the privacy problem, but they often incur large communication overhead or reduce federated learning accuracy. Moreover, there is no comprehensive scheme that addresses both privacy and security issues. In this article, we propose a homomorphic encryption-based privacy enhancement mechanism that is effective against membership inference attacks. Our method works in two phases. In phase I, it uses homomorphic encryption to encrypt participants' update parameters before they are sent to the aggregator. In phase II, we add a parameter selection method to the aggregator of the federated learning system that chooses participants' updated information with a certain probability. The experimental results show that the proposed method effectively defends against both membership inference attacks and poisoning attacks while affecting model accuracy less than other existing solutions.
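The following is a minimal sketch of the two-phase flow described above, not the paper's actual implementation: it assumes an additively homomorphic Paillier scheme (via the python-paillier package `phe`), assumes the selection probability is applied per participant, and assumes the private key is held outside the aggregator; the value 0.8 and the helper names are hypothetical.

```python
import random
import numpy as np
from phe import paillier  # python-paillier: additively homomorphic Paillier scheme

# Key setup (who holds the private key is an assumption; the abstract
# does not specify the key-management model).
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def client_update(local_update, pub_key):
    """Phase I (sketch): a participant encrypts each update parameter
    before sending it to the aggregator."""
    return [pub_key.encrypt(float(x)) for x in local_update]

def aggregate(encrypted_updates, selection_prob=0.8):
    """Phase II (sketch): the aggregator keeps each participant's encrypted
    update only with probability `selection_prob` (the paper's exact selection
    rule may differ), then sums the selected ciphertexts homomorphically
    without ever seeing plaintext parameters."""
    selected = [u for u in encrypted_updates if random.random() < selection_prob]
    if not selected:                      # fall back to using all updates
        selected = encrypted_updates
    agg = selected[0]
    for update in selected[1:]:
        agg = [a + b for a, b in zip(agg, update)]  # ciphertext addition
    return agg, len(selected)

# Toy run with 3 participants and a 4-dimensional "model update".
local_updates = [np.random.randn(4) * 0.01 for _ in range(3)]
encrypted = [client_update(u, public_key) for u in local_updates]
agg_cipher, n_selected = aggregate(encrypted)

# Decryption and averaging happen outside the aggregator in this sketch.
averaged = np.array([private_key.decrypt(c) for c in agg_cipher]) / n_selected
print("selected participants:", n_selected)
print("averaged update:", averaged)
```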
