Abstract

Federated learning (FL) has been widely adopted in both academia and industry around the world. FL offers advantages in data security, data diversity, real-time continual learning, hardware efficiency, and more. However, it introduces new privacy challenges, such as membership inference attacks and data poisoning attacks, when some participants cannot be assumed to be fully honest. Moreover, selfish participants can obtain others’ collaborative data without contributing their real local data, or may even provide fake data, which violates the fairness of FL schemes. Advanced privacy and fairness techniques, including blockchain, differential privacy, and zero-knowledge proofs, have therefore been integrated into FL schemes. Nevertheless, our investigation shows that most existing works still leave room for improvement in practicality. In this paper, we propose a Blockchain-based Pseudorandom Number Generation (BPNG) protocol based on Verifiable Random Functions (VRFs) to guarantee fairness in FL schemes. We then propose a Gradient Random Noise Addition (GRNA) protocol based on differential privacy and zero-knowledge proofs to protect data privacy in FL schemes. Finally, we implement both protocols on Hyperledger Fabric and analyze their performance. Simulation experiments show that, under our experimental settings, proof generation takes 18.993 s on average and on-chain verification takes 2.27 s on average, indicating that the scheme is practical in real deployments.
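
As a rough illustration of the two protocols summarized above, the following Python sketches show (i) how a verified VRF output could be mapped to a pseudorandom participant index in a BPNG-style selection step, and (ii) how a local gradient could be clipped and perturbed with Gaussian noise in a GRNA-style differential-privacy step. The function names, parameter values, and the assumed VRF prove/verify primitives are illustrative assumptions only; they are not the paper's actual constructions or proof circuits.

import hashlib

def select_index(vrf_output: bytes, num_participants: int) -> int:
    """Deterministically map a (verified) VRF output to a participant index."""
    # Every node that checked the VRF proof recomputes the same index.
    return int.from_bytes(vrf_output, "big") % num_participants

# Hypothetical example: pretend this digest is the VRF output that peers have
# already verified against the prover's public key and the round seed; the
# vrf_prove / vrf_verify primitives themselves are assumed, not implemented.
example_output = hashlib.sha256(b"demo-vrf-output").digest()
print(select_index(example_output, 10))

The gradient-perturbation sketch below uses the standard Gaussian mechanism with gradient clipping; the clipping norm and the (epsilon, delta) values are placeholders, not the parameters used in the paper's experiments.

import numpy as np

def grna_perturb(gradient, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Clip a local gradient and add Gaussian noise (Gaussian mechanism)."""
    rng = rng or np.random.default_rng()
    grad = np.asarray(gradient, dtype=float)
    # Clip to bound the L2 sensitivity of the update at clip_norm.
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    # Noise scale of the Gaussian mechanism for (epsilon, delta)-DP.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return grad + rng.normal(0.0, sigma, size=grad.shape)

print(grna_perturb([0.3, -1.2, 0.7]))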
