Abstract

Federated learning (FL) enables clients to train a machine learning model collaboratively by sharing only their model parameters for aggregation, which makes it well suited to bringing intelligence to IoT devices. To prevent privacy leakage from the parameters during aggregation, many FL frameworks use homomorphic encryption to protect clients' parameters. However, a secure FL framework should not only protect the privacy of the parameters but also guarantee the integrity of the aggregated results. In this paper, we propose an efficient homomorphic signcryption framework that encrypts and signs the parameters in a single operation. Owing to its additive homomorphic property, the framework allows the signcryptions of parameters to be aggregated securely. Thus, our framework can both verify the integrity of the aggregated results and protect the privacy of the parameters. Moreover, we employ a blinding technique to resist collusion between curious internal clients and the server, and we leverage the Chinese Remainder Theorem to improve efficiency. Finally, we simulate our framework in FedML. Extensive experiments on four benchmark datasets demonstrate that our framework protects privacy without compromising model performance and is more efficient than comparable frameworks.
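
To make the additive homomorphic aggregation concrete, the sketch below uses a toy Paillier instance, a standard additively homomorphic scheme, rather than the signcryption construction proposed in the paper; the primes, parameter values, and client updates are all illustrative assumptions.

```python
import math
import random

# Toy parameters: tiny primes for illustration only (completely insecure).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1                      # standard simple choice of Paillier generator
lam = (p - 1) * (q - 1)        # phi(n); a valid substitute for Carmichael's lambda here
mu = pow(lam, -1, n)           # lam^{-1} mod n, used during decryption (Python 3.8+)

def encrypt(m: int) -> int:
    """Paillier encryption: c = g^m * r^n mod n^2."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n, with L(u) = (u - 1) // n."""
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Each client encrypts its (quantized) parameter update; the server
# multiplies the ciphertexts, which adds the plaintexts homomorphically.
updates = [17, 42, 5]                        # hypothetical client updates
agg_cipher = 1
for u in updates:
    agg_cipher = (agg_cipher * encrypt(u)) % n2

assert decrypt(agg_cipher) == sum(updates)   # server never saw 17, 42, or 5
```

As the abstract notes, the Chinese Remainder Theorem is used to improve efficiency; one common realization of this idea is packing several quantized parameters into a single plaintext before encryption, amortizing the cost of each ciphertext operation across many model parameters.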
