Federated learning has become increasingly popular in recent years and is widely applied across machine learning. However, because training is executed locally, the training process is not visible to third parties (e.g., model users or auditors), which creates a need to verify it; at the same time, publicly releasing sensitive data for verification raises serious privacy concerns. Enabling public verification of the training process without revealing sensitive data is therefore a challenge. In this paper, we focus on verifiability and privacy in federated learning and propose a verifiable and privacy-preserving federated learning scheme (VPFL). We first employ zero-knowledge proofs to let a third party publicly verify the training process, which improves the transparency of training and the reliability of the model. To further protect sensitive data, we use a commitment scheme to ensure that no information about the data is leaked to the third party. We conduct extensive experiments to evaluate the performance of our scheme: for federated learning with 100 clients, it takes only 13.1 s to generate evidence and 8.8 s to verify it. A comparison with other schemes shows that ours satisfies both security properties, verifiability and privacy.
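To make the commitment-based privacy guarantee concrete, the sketch below shows a Pedersen commitment, one standard hiding-and-binding commitment scheme. The abstract does not specify which construction VPFL uses, so this is only an illustrative assumption; the group parameters are toy values (real deployments use groups of 2048 bits or more).

```python
# A minimal Pedersen commitment sketch (illustrative; not necessarily
# the construction used in VPFL). A client commits to a private value,
# publishes only the commitment, and can later open it for a verifier.
import secrets

# Toy group: subgroup of prime order q in Z_p^*, with p = 2q + 1.
p = 23  # small safe prime, for illustration only
q = 11  # prime order of the subgroup
g = 4   # generator of the order-q subgroup
h = 9   # second generator; its discrete log w.r.t. g is assumed unknown

def commit(value: int) -> tuple[int, int]:
    """Commit to `value`; returns (commitment, opening randomness)."""
    r = secrets.randbelow(q)  # blinding factor hides the committed value
    c = (pow(g, value % q, p) * pow(h, r, p)) % p
    return c, r

def verify(c: int, value: int, r: int) -> bool:
    """Check that (value, r) correctly opens commitment c."""
    return c == (pow(g, value % q, p) * pow(h, r, p)) % p

c, r = commit(7)            # client publishes c; keeps (7, r) private
assert verify(c, 7, r)      # hiding: verifier accepts the honest opening
assert not verify(c, 8, r)  # binding: a different value does not open c
```

The hiding property means the public commitment reveals nothing about the sensitive value to the third-party verifier, while binding prevents a client from later opening it to a different value; a zero-knowledge proof can then be given about the committed value without ever revealing it.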