Abstract

Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities. FL has seen successful deployment in production environments, and it has been adopted in services such as virtual keyboards, auto-completion, item recommendation, and several IoT applications. However, FL comes with the challenge of performing training over largely heterogeneous datasets, devices, and networks that are out of the control of the centralized FL server. Motivated by this inherent challenge, we aim to empirically characterize the impact of device and behavioral heterogeneity on the trained model. We conduct an extensive empirical study spanning nearly 1.5K unique configurations on five popular FL benchmarks. Our analysis shows that these sources of heterogeneity have a major impact on both model quality and fairness, degrading them by up to 4.6× and 2.2×, respectively, thus shedding light on the importance of considering heterogeneity in FL system design.
