Abstract

In federated learning (FL), the learning process takes place on the clients. Effectively motivating clients and mitigating the impact of statistical heterogeneity are key challenges in FL. This article proposes contribution- and participation-based federated learning (CPFL) to address these challenges. CPFL allocates client incentives and aggregates models according to client contribution ratios, thereby reducing the impact of heterogeneous data. To compute effective and approximately fair client contributions faster, we propose an extended Raiffa solution (ERS). Compared to the conventional Shapley value, ERS reduces the time complexity from $\mathscr{O}(2^{n})$ to $\mathscr{O}(n)$. We perform extensive experiments on the MNIST/EMNIST datasets, on heterogeneous datasets, and with different participation-reward ratios. The experimental results demonstrate that CPFL generally achieves a better learning effect in the heterogeneous case.
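To make the complexity claim concrete, the sketch below computes exact Shapley values for a toy contribution game. It is not the paper's ERS; it only illustrates why the exact Shapley value requires enumerating all $2^n$ coalitions, which is the cost ERS avoids. The coalition utility `v` (sum of per-client "data sizes") is a hypothetical stand-in for the validation-performance measure a real FL system would use.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values via the standard formula.

    Enumerates every coalition S not containing player p, so the number
    of calls to v grows as O(2^n) in the number of players -- the cost
    that motivates cheaper schemes such as the paper's ERS.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            for S in combinations(others, r):
                # Weight = |S|! (n - |S| - 1)! / n!
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[p] += w * (v(frozenset(S) | {p}) - v(frozenset(S)))
    return phi

# Hypothetical utility: coalition value = sum of member data sizes.
# For an additive game like this, each client's Shapley value equals
# its own size, which makes the result easy to check by hand.
sizes = {"c1": 3.0, "c2": 1.0, "c3": 2.0}
v = lambda S: sum(sizes[p] for p in S)

print(shapley_values(list(sizes), v))
```

Because the toy utility is additive, the computed values simply recover each client's size; with a non-additive utility (e.g. model accuracy on a validation set), the full exponential enumeration is what makes exact Shapley impractical at FL scale.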
