Abstract

Federated learning addresses the problems of data silos and privacy protection in artificial intelligence. However, privacy attacks can still infer or reconstruct sensitive information from the gradients that clients submit, leaking users' private data in federated learning. The secure aggregation (SecAgg) protocol protects users' privacy while completing federated learning tasks, but it incurs significant communication overhead and wall-clock training time on large-scale model training tasks, making it difficult to apply in bandwidth-limited federated applications. Recently, Rand-k sparsification with secure aggregation (Rand-k SparseSecAgg) was proposed to optimize the SecAgg protocol, but its reduction of communication overhead and training time is limited. In this paper, we replace Rand-k sparsification with Top-k sparsification and design a Top-k sparsification with secure aggregation (Top-k SparseSecAgg) protocol for privacy-preserving federated learning that further reduces communication overhead and wall-clock training time. In addition, we optimize the proposed protocol by assigning clients to groups in a logical layer, which lowers the upper bound on the compression ratio and the practical communication overhead of Top-k SparseSecAgg. Experiments demonstrate that Top-k SparseSecAgg reduces communication overhead by 6.25× compared to SecAgg and 3.78× compared to Rand-k SparseSecAgg, and reduces wall-clock training time by 1.43× compared to SecAgg and 1.13× compared to Rand-k SparseSecAgg. Our protocol is therefore better suited to bandwidth-limited federated applications that must protect privacy while completing the learning task.
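To make the sparsification step concrete, the sketch below contrasts the Top-k and Rand-k operators that such protocols apply to client gradients before secure aggregation. This is a minimal NumPy illustration under our own assumptions (function names and structure are hypothetical), not the paper's implementation or its masking/aggregation logic.

```python
# Minimal sketch: Top-k vs. Rand-k gradient sparsification, the compression
# step applied on each client before secure aggregation in SparseSecAgg-style
# protocols. Illustrative only; not the paper's code.
import numpy as np

def top_k_sparsify(grad: np.ndarray, k: int):
    """Keep the k entries with the largest magnitude; zero the rest."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of top-k magnitudes
    mask = np.zeros_like(grad, dtype=bool)
    mask[idx] = True
    return np.where(mask, grad, 0.0), idx

def rand_k_sparsify(grad: np.ndarray, k: int, rng=None):
    """Keep k uniformly random entries; zero the rest."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(grad.size, size=k, replace=False)
    mask = np.zeros_like(grad, dtype=bool)
    mask[idx] = True
    return np.where(mask, grad, 0.0), idx

grad = np.array([0.9, -0.05, 0.4, 0.01, -0.7])
sparse, kept = top_k_sparsify(grad, k=2)
print(sparse)  # only the two largest-magnitude entries survive: [0.9 0. 0. 0. -0.7]
```

The intuition for the paper's switch is visible here: Top-k retains the entries carrying the most gradient information for a given k, whereas Rand-k keeps arbitrary entries, so Top-k can tolerate a smaller k (stronger compression) at comparable model quality.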
