Abstract
Federated learning (FL) is a promising new technology in the field of IoT intelligence. However, exchanging model-related data in FL may leak participants' sensitive information. To address this problem, we propose a novel privacy-preserving FL framework based on an innovative chained secure multiparty computing technique, named chain-PPFL. Our scheme leverages two main mechanisms: 1) a single-masking mechanism that protects information exchanged between participants and 2) a chained-communication mechanism that enables masked information to be transferred between participants along a serial chain. We conduct extensive simulation-based experiments on two public data sets (MNIST and CIFAR-100), comparing both training accuracy and leakage defence with other state-of-the-art schemes. We use two data sample distributions (IID and NonIID) and three training models (CNN, MLP, and L-BFGS) in our experiments. The experimental results demonstrate that the chain-PPFL scheme can achieve practical privacy preservation (equivalent to differential privacy with $\epsilon$ approaching zero) for FL at some communication cost, without impairing the accuracy or convergence speed of the training model.
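To illustrate the general idea behind the two mechanisms, the following is a minimal sketch of chained masked aggregation, assuming scalar (integer) model updates and a mask drawn by the first node in the chain; the function and variable names are illustrative and not taken from the paper, and the real scheme operates on model parameter vectors with its own mask-generation and chain-ordering details.

```python
import random

def chained_masked_sum(updates, modulus=2**32):
    """Aggregate participants' updates along a serial chain so that no
    intermediate node sees any other node's raw update (sketch only)."""
    # The chain head draws a random mask; intermediate nodes only ever
    # see mask + partial sum, which is uniformly distributed modulo `modulus`.
    mask = random.randrange(modulus)
    token = mask
    # Each participant adds its own local update to the running masked
    # token and forwards the result to the next node in the chain.
    for u in updates:
        token = (token + u) % modulus
    # The final receiver (who knows the mask) removes it, recovering only
    # the aggregate sum, never the individual updates.
    return (token - mask) % modulus
```

For example, `chained_masked_sum([3, 5, 7])` returns 15 regardless of the random mask, while each forwarded token individually reveals nothing about a single participant's contribution.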