Abstract

As one of the most important techniques in privacy-preserving computation, federated learning has attracted much attention because it makes data "available but invisible" (i.e., clients upload gradients instead of raw data). However, adversaries may still recover private information, such as labels, membership, or even training data, from the shared gradients. In addition, a malicious server may return an incorrect or forged aggregated result to the clients for illicit gain. To ensure both verifiability and privacy preservation, in this paper we present a verifiable secure aggregation scheme under a dual-server federated learning framework. Specifically, we combine the learning with errors (LWE) cryptosystem with secret sharing to protect the privacy of the aggregated result and of each client's local gradient. Meanwhile, we design a double-verification protocol, consisting of server-side and client-side verification, to efficiently verify the correctness of the aggregated result and ensure data availability. On the server side, the two servers mutually verify the correctness of the aggregated result via a linear homomorphic hash. Even after this mutual verification passes, a malicious server may still broadcast a forged aggregated result to the clients; our client-side verification protocol therefore enables clients to identify the correct aggregated result sent by the semi-trusted server, ensuring data availability. To the best of our knowledge, existing solutions do not take data availability into account. Extensive experimental comparisons with state-of-the-art schemes demonstrate the effectiveness and efficiency of the proposed scheme in terms of accuracy, computational cost, and communication overhead.
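The abstract does not spell out the verification construction, but the linear homomorphic hash it names can be illustrated with a standard exponentiation-based sketch: each client publishes a hash of its gradient, and because the hash is linear, the hash of the aggregate must equal the product of the client hashes. All parameters, names, and the quantization here are illustrative assumptions, not the paper's actual scheme.

```python
# Sketch of a linear homomorphic hash H(v) = prod_i g_i^{v_i} mod p.
# Linearity: H(u + v) = H(u) * H(v) mod p, so a verifier can check an
# aggregated gradient against the per-client hashes without seeing the
# individual gradients again. Toy parameters, NOT the paper's scheme.
import random

p = 2**61 - 1          # toy prime modulus (illustrative assumption)
dim = 4                # gradient dimension (illustrative)
rng = random.Random(0)
g = [rng.randrange(2, p) for _ in range(dim)]  # public generators

def hhash(vec):
    """Linear homomorphic hash of a non-negative integer vector."""
    h = 1
    for gi, vi in zip(g, vec):
        h = (h * pow(gi, vi, p)) % p
    return h

# Each client hashes its (quantized, non-negative) local gradient.
grads = [[rng.randrange(0, 100) for _ in range(dim)] for _ in range(3)]
client_hashes = [hhash(v) for v in grads]

# A server aggregates the gradients coordinate-wise.
agg = [sum(col) for col in zip(*grads)]

# Verification: hash of the aggregate == product of the client hashes.
expected = 1
for h in client_hashes:
    expected = (expected * h) % p
assert hhash(agg) == expected
```

In a real deployment the gradients would first be quantized to integers and the hash parameters generated from a trusted setup; this sketch only shows why the check H(Σv) = ΠH(v) detects a forged aggregate with overwhelming probability.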

Full Text: Published version (Free)