Abstract

In federated learning (FL), the exchange of model updates may inadvertently expose sensitive information about participants, raising significant privacy concerns. Existing privacy-preserving techniques for FL, such as differential privacy (DP) and secure multi-party computation (SMC), offer viable solutions but face practical challenges, including degraded model performance and complex implementations. To overcome these hurdles, we propose LF3PFL, a novel and pragmatic approach to privacy preservation in FL that employs localized federated updates to strengthen the protection of participant data. Furthermore, this work refines the approach by incorporating cross-entropy optimization, carefully refining the loss measurement, and reducing information loss during model training to enhance both model efficacy and data confidentiality. Our approach is theoretically grounded and empirically validated through extensive simulations on three public datasets: CIFAR-10, Shakespeare, and MNIST. We evaluate its effectiveness by comparing training accuracy and privacy protection against state-of-the-art techniques. Our experiments cover five local model architectures (Simple-CNN, Moderate-CNN, LeNet, VGG-9, and ResNet-18), providing a comprehensive assessment across a variety of scenarios. The results demonstrate that LF3PFL not only maintains competitive training accuracy but also significantly improves privacy preservation, surpassing existing methods in practical applications. This balance between privacy and performance underscores the potential of localized federated updates as a key component of future FL privacy strategies, offering a scalable and effective solution to one of the field's most pressing challenges.
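To make the core idea concrete, the following is a minimal PyTorch-style sketch of one federated round in which clients train locally with a cross-entropy loss but upload only a designated subset of parameters, keeping the remainder local. This is an illustrative sketch under assumed names (`local_update`, `federated_round`, `share_keys` are all hypothetical), not the paper's actual LF3PFL implementation.

```python
# Hypothetical sketch: one FL round where each client trains locally with
# cross-entropy loss and shares only a chosen subset of layers, keeping the
# rest private. Illustration only; not the paper's LF3PFL algorithm.
import copy
import torch
import torch.nn as nn

def local_update(model, loader, epochs=1, lr=0.01):
    """Standard local training with cross-entropy loss."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_round(global_model, client_loaders, share_keys):
    """Average only the shared parameters; local-only layers never leave clients."""
    shared_states = []
    for loader in client_loaders:
        state = local_update(global_model, loader)
        # Clients upload only the parameters named in share_keys (hypothetical).
        shared_states.append({k: state[k] for k in share_keys})
    new_state = global_model.state_dict()
    for k in share_keys:
        new_state[k] = torch.stack([s[k].float() for s in shared_states]).mean(0)
    global_model.load_state_dict(new_state)
    return global_model
```

In this sketch, privacy protection comes from restricting what leaves each client: only the layers listed in `share_keys` are aggregated, so the server never observes the full locally trained model.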
