Abstract

Federated Learning (FL) typically involves a large number of heterogeneous clients (exhibiting both statistical and resource heterogeneity) when collaboratively training a model, which compromises model performance. Recent research has focused on customizing FL frameworks to address these issues. However, compared to models trained on independent and identically distributed (IID) data, these methods still suffer performance degradation in non-IID scenarios, and their resource consumption remains considerable. In this work, we present an efficient FL framework named FedRich to tackle statistical and resource heterogeneity. The key ideas of FedRich are adaptive segmentation of the model and heuristic scheduling of the active clients. Adaptive segmentation enables resource-dependent customization of the model, allowing clients with varying resource budgets to conduct local training. The heuristic scheduling strategy appropriately selects clients to participate in federated training, mitigating statistical heterogeneity. Moreover, FedRich incorporates a hierarchical aggregation mechanism to stably aggregate heterogeneous models of different sophistication. Extensive experiments on three benchmark datasets demonstrate that FedRich outperforms state-of-the-art heterogeneous FL approaches.
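The aggregation of heterogeneous models of different sophistication can be illustrated with a minimal sketch. Here we assume a width-sliced parameter layout in which smaller-budget clients train only a prefix of the full parameter vector; the function name and slicing scheme are illustrative assumptions, not the paper's exact mechanism:

```python
import numpy as np

def aggregate_heterogeneous(updates, full_size):
    """Average client weight vectors that each cover only a prefix of the
    full parameter vector (larger-budget clients cover more of it)."""
    total = np.zeros(full_size)
    counts = np.zeros(full_size)
    for w in updates:
        k = len(w)               # this client's model covers the first k weights
        total[:k] += w
        counts[:k] += 1
    counts[counts == 0] = 1      # avoid division by zero for uncovered weights
    return total / counts

# Three clients with different resource budgets hold slices of a
# 4-parameter model; each weight is averaged over the clients that hold it.
updates = [np.array([1.0, 1.0]),            # small client
           np.array([3.0, 3.0, 3.0]),       # medium client
           np.array([5.0, 5.0, 5.0, 5.0])]  # large client
agg = aggregate_heterogeneous(updates, 4)
print(agg)  # [3. 3. 4. 5.]
```

Each coordinate is averaged only over the clients whose (sub)model contains it, so clients of different sizes can contribute to one global model without padding or discarding updates.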
