Abstract

Federated learning (FL) has emerged as an effective strategy for breaking down data silos and has attracted significant interest from both industry and academia in recent years. However, existing iterative FL approaches often require a large number of communication rounds and struggle to perform well on unbalanced (non-IID) data. Furthermore, the increasing complexity of modern networks makes traditional differential privacy expensive to apply for protecting client privacy. In this context, the authors introduce FedGM, a method designed to reduce communication overhead and achieve strong results in non-IID scenarios while maintaining considerable accuracy even under a small privacy budget. Specifically, the authors devise a method to extract the knowledge in each client's data by condensing it into a much smaller synthesized dataset on which models train to performance comparable to training on the original data. Additionally, the authors propose an approach for applying label differential privacy to protect the synthesized dataset. Evaluating on four classification datasets, the authors demonstrate that the approach outperforms traditional methods while requiring only a single communication round, and that client models trained with it achieve higher accuracy and stronger privacy protection than traditional solutions.
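
The abstract does not spell out the condensation procedure, but a common way to synthesize such a scaled-down dataset is gradient matching (plausibly the "GM" in FedGM, though the abstract does not say): the synthetic examples are optimized so that the gradients they induce in a model match the gradients induced by real client data. The following PyTorch sketch is purely illustrative; the model, data shapes, and hyperparameters are hypothetical, and published gradient-matching methods add refinements (layerwise distances, model re-initialization across outer loops) omitted here.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_match_loss(model, real_x, real_y, syn_x, syn_y):
    # Distance between the gradients the model produces on a real batch
    # and on the learnable synthetic batch.
    criterion = nn.CrossEntropyLoss()
    real_grads = torch.autograd.grad(
        criterion(model(real_x), real_y), list(model.parameters()))
    syn_grads = torch.autograd.grad(
        criterion(model(syn_x), syn_y), list(model.parameters()),
        create_graph=True)  # keep the graph so the loss reaches syn_x
    return sum(F.mse_loss(g_s, g_r.detach())
               for g_s, g_r in zip(syn_grads, real_grads))

# Toy setup: a linear classifier, random stand-in "client" data, and one
# learnable synthetic image per class (all names and values hypothetical).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
real_x, real_y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
syn_x = torch.randn(10, 1, 28, 28, requires_grad=True)
syn_y = torch.arange(10)
opt = torch.optim.SGD([syn_x], lr=0.1)

for step in range(100):
    opt.zero_grad()
    gradient_match_loss(model, real_x, real_y, syn_x, syn_y).backward()
    opt.step()  # only the synthetic images are updated

In a one-round FL setting, each client would upload only its small optimized synthetic set (rather than model updates over many rounds), which is consistent with the communication savings the abstract claims.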

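The abstract likewise does not name the label differential privacy mechanism applied to the synthesized dataset. A standard baseline that satisfies epsilon-label-DP is k-ary randomized response, sketched below with NumPy; the function name and parameters are illustrative, and the authors' actual mechanism may differ.

import numpy as np

def randomized_response(labels, num_classes, epsilon, rng=None):
    # Keep the true label with probability e^eps / (e^eps + K - 1);
    # otherwise report one of the K - 1 other labels uniformly at random.
    # The ratio of output probabilities is exactly e^eps, giving eps-label-DP.
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    keep = rng.random(labels.shape) < p_keep
    shift = rng.integers(1, num_classes, size=labels.shape)  # uniform over the other labels
    return np.where(keep, labels, (labels + shift) % num_classes)

noisy_labels = randomized_response([3, 1, 4, 1, 5], num_classes=10, epsilon=1.0)

A smaller epsilon pushes p_keep toward uniform guessing, which illustrates the trade-off behind the abstract's claim of maintaining considerable accuracy under a small privacy budget.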