Abstract

Federated learning (FL) is a rapidly evolving field within machine learning for collaboratively training models at the network edge in a privacy-preserving fashion, without training data ever leaving the devices where it was generated. This privacy-preserving property makes FL attractive for applications with sensitive data, such as healthcare, finance, and social media. However, real-world FL at the wireless network edge faces barriers stemming from the massive parallelism of wireless clients and the high communication cost of model transmission. That communication cost is heavily affected by the heterogeneous distribution of data across clients, and recent works attempt to address this problem with novel client-side optimization strategies. In this article, we provide a tutorial on model training in FL, survey recent developments in client-side optimization, and relate them to the communication properties of FL. We then run a set of comparison experiments on a representative subset of these strategies, gaining insight into their communication-convergence trade-offs. Finally, we highlight open challenges in client-side optimization and offer suggestions for future developments in FL at the wireless edge.
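
To make the training process the abstract refers to concrete, the sketch below is a minimal, hedged illustration of the canonical FedAvg loop (the baseline that most client-side optimization strategies extend), run on a synthetic non-IID least-squares task. The function names (local_sgd, fedavg_round), hyperparameters, and toy data are assumptions for illustration only, not the article's implementation; the `steps` parameter is the local-computation knob whose tuning drives the communication-convergence trade-off discussed above.

```python
# Hedged sketch: canonical FedAvg on a toy least-squares problem.
# Function names, hyperparameters, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, steps=5, lr=0.02):
    """Run a few local gradient steps on one client's private data."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: broadcast, local training, weighted averaging."""
    sizes = np.array([len(y) for _, y in clients])
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    return np.average(local_models, axis=0, weights=sizes)

# Synthetic heterogeneous (non-IID) clients: each sees differently shifted inputs.
d = 5
w_true = rng.normal(size=d)
clients = []
for k in range(4):
    X = rng.normal(loc=0.5 * k, size=(30, d))    # client-specific input shift
    y = X @ w_true + 0.01 * rng.normal(size=30)  # noisy linear labels
    clients.append((X, y))

w = np.zeros(d)
for _ in range(20):  # each iteration costs one up/down model transmission
    w = fedavg_round(w, clients)
print("distance to optimum:", np.linalg.norm(w - w_true))
```

Raising `steps` lets each client do more work per round, cutting the number of costly wireless transmissions, but under heterogeneous data the local models drift apart, which is precisely the tension the surveyed client-side strategies try to manage.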
