Abstract

Federated learning (FL) has emerged as an effective strategy for breaking down data silos and has attracted significant interest from both industry and academia in recent years. However, existing iterative FL approaches often require many communication rounds and struggle to perform well on unbalanced datasets. Moreover, the growing complexity of modern networks makes traditional differential privacy expensive to apply for protecting client privacy. In this context, the authors introduce FedGM, a method designed to reduce communication overhead while achieving strong results in non-IID scenarios; FedGM attains considerable accuracy even under a small privacy budget. Specifically, the authors extract knowledge from each client’s data by synthesizing a scaled-down dataset that trains models nearly as well as the original data. In addition, they propose a novel way of applying label differential privacy to protect the synthesized dataset. Evaluated on four classification datasets, the approach requires only a single communication round and, compared with traditional solutions, achieves higher client-model accuracy with stronger privacy guarantees.
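The abstract does not specify which mechanism FedGM uses to apply label differential privacy to the synthesized labels. One standard ε-label-DP baseline is k-ary randomized response, sketched below for illustration; the mechanism choice, the function name, and the use of NumPy are our assumptions, not details confirmed by the paper.

```python
import numpy as np

def randomized_response_labels(labels, num_classes, epsilon, rng=None):
    """k-ary randomized response: keep each label with probability
    p = e^eps / (e^eps + k - 1); otherwise replace it with a uniformly
    chosen *different* class. This satisfies epsilon-label-DP."""
    if rng is None:
        rng = np.random.default_rng()
    labels = np.asarray(labels)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    keep = rng.random(labels.shape) < p_keep
    # Shifting by 1..k-1 (mod k) picks a uniform class other than the true one.
    shift = rng.integers(1, num_classes, size=labels.shape)
    return np.where(keep, labels, (labels + shift) % num_classes)

# Example: privatize the labels of a small synthesized dataset (10 classes).
noisy = randomized_response_labels([3, 7, 1, 7], num_classes=10, epsilon=1.0)
```

With a small privacy budget ε, the keep probability shrinks toward 1/k, so a server training on such labels would typically debias the label distribution using the known flip probabilities before fitting the model.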
