Abstract

Federated learning has been widely applied as a distributed machine learning method in various fields, allowing a global model to be trained by sharing local gradients instead of raw data. However, directly sharing local gradients still carries the risk of leaking private data, and a malicious server might falsify the aggregated result to disrupt model updates. To address these issues, many privacy-preserving and verifiable federated learning schemes have been proposed; however, existing schemes suffer from significant computation overhead in either encryption or verification. In this paper, we present ESVFL, an efficient and secure verifiable federated learning scheme with privacy preservation. This scheme simultaneously achieves low computation overhead for both encryption and verification on the user side. We design an efficient privacy-preserving method to encrypt users’ local gradients. With this method, the computation and communication overheads of encryption on the user side are independent of the number of users. Users can efficiently verify the correctness of the aggregated results returned by the cloud servers using cross-verification; the verification process requires no interaction among users and no additional computation. Furthermore, we construct an efficient method to handle user dropout. When some users drop out, online users incur no additional computation or communication overhead, while the correctness of the aggregated result over the online users’ encrypted gradients is still guaranteed. The security analysis and performance evaluation demonstrate that ESVFL is secure and achieves efficient encryption and verification.
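The abstract does not give the concrete construction, but the general idea of encrypting local gradients so that a server can compute only their sum is often realized with additive masking, where pairwise masks cancel in aggregation. The sketch below is purely illustrative of that idea and is not ESVFL's actual scheme; in a real protocol the pairwise masks would be derived from per-pair key agreement rather than a shared seed.

```python
import numpy as np

def pairwise_masks(num_users, dim, seed=0):
    """Derive pairwise masks so that the masks of all users sum to zero.

    For each pair (i, j), user i adds a random vector m and user j
    subtracts the same m, so the masks cancel in the aggregate.
    (Illustrative only: a real scheme derives m per pair via key
    agreement instead of a shared seed.)
    """
    rng = np.random.default_rng(seed)
    masks = [np.zeros(dim) for _ in range(num_users)]
    for i in range(num_users):
        for j in range(i + 1, num_users):
            m = rng.standard_normal(dim)
            masks[i] += m
            masks[j] -= m
    return masks

def mask_gradient(grad, mask):
    """Each user uploads only its masked gradient, hiding the raw value."""
    return grad + mask

# Three users with toy 2-dimensional gradients.
grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masks = pairwise_masks(num_users=3, dim=2)

# The server sums the masked uploads; the masks cancel, so it learns
# only the aggregate, not any individual gradient.
aggregate = sum(mask_gradient(g, m) for g, m in zip(grads, masks))
# aggregate equals the true sum of the gradients: [9.0, 12.0]
```

Each user's masking cost here grows with the number of users, which is exactly the overhead the abstract says ESVFL avoids; the sketch only shows why masked uploads can still aggregate correctly.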
