Abstract

One of the most notable features of federated learning is that the global model parameters can be updated without direct access to the users' local data. However, various security and privacy problems still arise in the federated learning process. The problem of devising a secure and verifiable federated learning framework that yields a high-performance model while protecting the rights and interests of participants has not been sufficiently studied: a malicious server may perform dishonest data aggregation and return incorrect aggregated gradients to all participants. What is more, a server with ulterior motives may return correct aggregated results to some participants while returning wrong results to a specific participant. To address these problems, we propose SVeriFL, a successive verifiable federated learning framework with privacy preservation. Specifically, an elaborately designed protocol based on BLS signatures and multi-party security techniques is introduced, so that the integrity of the parameters uploaded by each participant and the correctness of the server's aggregated results can be verified; the consistency of the aggregation results received from the server can also be checked among any set of participants. Moreover, CKKS approximate homomorphic encryption is used to protect the data privacy of the participants. Experimental results and analyses validate the practical performance and computational efficiency of the proposed SVeriFL.
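To illustrate the privacy-preserving aggregation idea mentioned above, the following minimal sketch shows how participants' gradient vectors can be encrypted under CKKS and summed homomorphically by an untrusted server. It uses the TenSEAL library and a single shared context purely for illustration; this is an assumption on our part and not necessarily the implementation or key setup used in SVeriFL.

    import tenseal as ts  # assumed CKKS library for illustration; not confirmed by the paper

    # CKKS context shared by the participants (holds the secret key here for simplicity)
    context = ts.context(ts.SCHEME_TYPE.CKKS,
                         poly_modulus_degree=8192,
                         coeff_mod_bit_sizes=[60, 40, 40, 60])
    context.global_scale = 2 ** 40

    # Two participants encrypt their local gradient vectors
    grad_a = ts.ckks_vector(context, [0.10, -0.20, 0.30])
    grad_b = ts.ckks_vector(context, [0.05, 0.10, -0.10])

    # The server adds the ciphertexts without ever seeing plaintext gradients
    aggregated = grad_a + grad_b

    # A participant decrypts the aggregated result (approximately [0.15, -0.10, 0.20])
    print(aggregated.decrypt())

Because CKKS is an approximate scheme, the decrypted sum matches the plaintext aggregation only up to small numerical error, which is acceptable for gradient aggregation.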
