Abstract

Federated learning (FL) is a collaborative training paradigm that makes full use of distributed data while protecting data privacy. A key challenge in FL is statistical heterogeneity: local data distributions differ across clients, so local optimization objectives become inconsistent, which ultimately degrades the performance of the globally aggregated model. We propose Federated Ensemble Learning (FedEL), a novel approach to the non-independent and identically distributed (non-IID) problem that exploits the heterogeneity of client data distributions to train a set of diverse weak learners, which are then combined into a global model. Experiments demonstrate that FedEL improves performance in non-IID scenarios; even under extreme statistical heterogeneity, its average accuracy is 3.54% higher than that of the state-of-the-art FL method. Moreover, FedEL reduces model storage and inference costs compared with traditional ensemble learning, and it shows good generalization across datasets, including natural scene image datasets and medical image datasets.
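The core idea described above, training one weak learner per client on its local (non-IID) data and combining them into a global ensemble, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual architecture: the "class-prior" weak learner and the score-averaging ensemble are assumptions made for demonstration.

```python
# Toy sketch of the FedEL idea: each client trains a simple weak learner
# on its own non-IID shard; the server ensembles the learners.
# The prior-predictor learner and averaging rule are illustrative only.
from collections import Counter
from typing import List


def train_weak_learner(labels: List[int], num_classes: int) -> List[float]:
    """Toy weak learner: a client's local class-frequency 'model'."""
    counts = Counter(labels)
    total = len(labels)
    return [counts.get(c, 0) / total for c in range(num_classes)]


def ensemble_predict(learners: List[List[float]]) -> int:
    """Server-side ensemble: average the learners' class scores
    and return the highest-scoring class."""
    num_classes = len(learners[0])
    avg = [sum(l[c] for l in learners) / len(learners)
           for c in range(num_classes)]
    return max(range(num_classes), key=avg.__getitem__)


# Three clients with skewed (non-IID) label distributions.
clients = [[0, 0, 0, 1], [1, 1, 1, 2], [2, 2, 0, 0]]
learners = [train_weak_learner(y, num_classes=3) for y in clients]
print(ensemble_predict(learners))  # prediction of the global ensemble
```

Each learner alone is biased toward its client's dominant class; averaging their scores illustrates how diversity among weak learners can be pooled into a single global decision.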
