Abstract

Horizontal federated learning (HFL) is a distributed framework for settings where the datasets of different participants share a similar feature space but differ in sample space. HFL commonly employs clustering algorithms to facilitate information sharing among clients within a cluster and to aggregate models. However, clustering-based approaches handle heterogeneous data poorly, and their stability cannot be guaranteed. Unlike existing methods for addressing these issues, we propose a new HFL framework that employs a cluster sampling method based on a rotation mechanism. In our method, we quantify the differences among clients' data using a Gaussian Mixture Model (GMM) and adaptively adjust both the probability that intra-cluster clients are selected by the server and the number of clusters. Our numerical experiments show that the new framework handles diverse data scenarios and outperforms other baselines in convergence speed and accuracy.
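To make the sampling idea concrete, the following Python sketch illustrates one plausible reading of the abstract; it is not the paper's exact algorithm. The client summary vectors (`client_features`), the GMM-based difference score, the softmax selection probabilities, and the `decay` rotation factor are all assumptions introduced for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical setup: each client is summarized by a feature vector
# (e.g., statistics of its local data or of its model update).
n_clients, dim = 20, 5
client_features = rng.normal(size=(n_clients, dim))

# Fit a GMM over client summaries; components act as client clusters.
# The number of components is a tunable assumption here; the paper's
# framework adjusts it adaptively.
n_clusters = 3
gmm = GaussianMixture(n_components=n_clusters, random_state=0)
labels = gmm.fit_predict(client_features)

# Per-client "difference" score: a low GMM log-likelihood marks a
# client whose data is atypical relative to the fitted mixture.
diff_score = -gmm.score_samples(client_features)

# Turn difference scores into intra-cluster selection probabilities
# via a per-cluster softmax, so more atypical clients are sampled
# more often (one plausible design choice).
probs = np.zeros(n_clients)
for c in range(n_clusters):
    idx = np.where(labels == c)[0]
    if idx.size == 0:
        continue
    w = np.exp(diff_score[idx] - diff_score[idx].max())
    probs[idx] = w / w.sum()

def sample_round(probs, labels, decay=0.5):
    """Rotation-style sampling: pick one client per cluster, then
    down-weight it so selection rotates within the cluster."""
    chosen = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if idx.size == 0:
            continue
        p = probs[idx] / probs[idx].sum()
        pick = rng.choice(idx, p=p)
        chosen.append(int(pick))
        probs[pick] *= decay  # reduce chance of immediate re-selection
    return chosen

for t in range(3):
    print(f"round {t}: selected clients {sample_round(probs, labels)}")
```

Running the sketch prints a different per-cluster selection each round, since the decay step rotates which clients within a cluster are most likely to be chosen next.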
