Federated learning has emerged as a promising paradigm in which a global model is trained collaboratively by sharing local gradients instead of raw data. However, the shared gradients can still leak private information about the local data, and the central server may forge the aggregated results. Moreover, resource-constrained devices commonly drop out during federated learning. Existing solutions address either efficiency or privacy preservation, but not both; designing a verifiable, lightweight secure aggregation scheme with drop-out resilience for large-scale federated learning remains a challenge. In this paper, we propose VOSA, an efficient verifiable and oblivious secure aggregation protocol for privacy-preserving federated learning. We exploit aggregator oblivious encryption to efficiently mask users' local gradients. The central server performs aggregation on the masked gradients without learning anything about the underlying local data, while each user can efficiently verify the correctness of the aggregated results. Moreover, VOSA adopts a dynamic group management mechanism that tolerates user drop-outs without affecting their participation in future training rounds. Security analysis shows that VOSA satisfies the security requirements of privacy-preserving federated learning, and extensive experimental evaluations on real-world datasets demonstrate that VOSA achieves practical performance with high efficiency.
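To give a feel for how aggregation over masked gradients can work, the sketch below shows a simplified pairwise additive-masking scheme, not VOSA's aggregator oblivious encryption or its verification mechanism. All names (pairwise_mask, mask_gradient, aggregate) are hypothetical, gradients are assumed to be integer-encoded, and drop-out handling is omitted; the point is only that the server can sum masked inputs while the masks cancel in the total.

```python
# Toy illustration of secure aggregation via pairwise additive masking.
# NOT the VOSA construction: no aggregator oblivious encryption, no
# verification, no drop-out handling. Names and parameters are hypothetical.
import hashlib
import numpy as np

MOD = 2**32  # work over a finite ring so masks wrap around cleanly


def pairwise_mask(seed: bytes, size: int) -> np.ndarray:
    """Derive a deterministic pseudo-random mask from a seed shared by two users."""
    rng = np.random.default_rng(int.from_bytes(hashlib.sha256(seed).digest()[:8], "big"))
    return rng.integers(0, MOD, size=size, dtype=np.uint64)


def mask_gradient(grad: np.ndarray, uid: int, peers: list, seeds: dict) -> np.ndarray:
    """Add masks shared with higher-id peers, subtract those shared with lower-id peers."""
    masked = grad.astype(np.uint64) % MOD
    for peer in peers:
        if peer == uid:
            continue
        m = pairwise_mask(seeds[frozenset({uid, peer})], grad.size)
        masked = (masked + m) % MOD if uid < peer else (masked - m) % MOD
    return masked


def aggregate(masked_grads: list) -> np.ndarray:
    """Server-side sum of masked gradients; the pairwise masks cancel out."""
    total = np.zeros_like(masked_grads[0])
    for g in masked_grads:
        total = (total + g) % MOD
    return total


if __name__ == "__main__":
    ids = [0, 1, 2]
    seeds = {frozenset({i, j}): f"{i}-{j}".encode() for i in ids for j in ids if i < j}
    grads = {i: np.array([10 * i + 1, 10 * i + 2, 10 * i + 3], dtype=np.uint64) for i in ids}
    masked = [mask_gradient(grads[i], i, ids, seeds) for i in ids]
    print(aggregate(masked))          # equals the plain sum of the gradients
    print(sum(grads.values()) % MOD)  # [33 36 39]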