Abstract

Federated Learning (FL), which enables multiple clients to cooperatively train a global model without revealing their private data, has gained significant attention from researchers in recent years. However, the data samples on each participating device in FL are often not independent and identically distributed (non-IID), leading to significant statistical heterogeneity challenges. In this paper, we propose FL-Enhance, a novel framework that addresses the non-IID data issue in FL by leveraging established solutions such as data selection, data compression, and data augmentation. Specifically, FL-Enhance uses conditional GANs (cGANs) trained at the server level, a relatively novel approach within the FL framework, and applies data compression techniques to preserve privacy during data sharing between clients and the server. We compare our framework with the commonly used SMOTE data augmentation technique and test it with different FL algorithms, including FedAvg, FedNova, and FedOpt. We conducted experiments on both image and tabular data to evaluate the effectiveness of the proposed framework. The experimental findings show that FL-Enhance can substantially enhance the performance of the trained models under severely pathological client distributions while still preserving privacy, the fundamental requirement in the FL context.
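To make the augmentation idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of server-side cGAN sample generation in PyTorch. All names and dimensions here (Generator, latent_dim, num_classes, feature_dim) are illustrative assumptions: the server draws noise, conditions on an under-represented class label, and produces synthetic samples that could then be compressed and shared with data-poor clients.

```python
# Hypothetical sketch of server-side conditional generation; architecture
# and dimensions are illustrative, not the paper's actual design.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Conditional generator: maps (noise, class label) -> synthetic sample."""
    def __init__(self, latent_dim=64, num_classes=10, feature_dim=32):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 128),
            nn.ReLU(),
            nn.Linear(128, feature_dim),
        )

    def forward(self, z, labels):
        # Condition generation by concatenating noise with a label embedding.
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x)

# Server-side usage: synthesize samples for a class that is scarce on some
# client, then (after compression) share the synthetic batch with that client.
gen = Generator()
z = torch.randn(16, 64)        # a batch of 16 noise vectors
scarce = torch.full((16,), 3)  # suppose class 3 is under-represented
synthetic = gen(z, scarce)     # -> synthetic feature batch of shape (16, 32)
```

This sketch conditions by concatenating a learned label embedding to the noise vector, one common cGAN formulation; the actual FL-Enhance architecture and training procedure are those described in the paper.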
