Abstract

Federated learning (FL) ameliorates privacy concerns in settings where a central server coordinates learning from data distributed across many clients; rather than sharing the data, the clients train locally and report the models they learn to the server. Aggregation of local models requires communicating massive amounts of information between the clients and the server, consuming network bandwidth. We propose a novel framework for updating the global model in communication-constrained FL systems by requesting input only from the clients with informative updates, and estimating the local updates that are not communicated. Specifically, describing the progression of the model’s weights by an Ornstein–Uhlenbeck process allows us to develop a sampling strategy for selecting a subset of clients with significant weight updates; model updates of the clients not selected for communication are replaced by their estimates. We test this policy on realistic federated benchmark datasets and show that the proposed framework provides up to a 50% reduction in communication while achieving performance that is competitive with or superior to the baselines. The proposed method represents a new line of strategies for communication-efficient FL that is orthogonal to existing user-driven techniques such as compression, and thus complements rather than replaces them.
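
As a rough illustration of the idea, the sketch below simulates one round of the select-and-estimate scheme described above: the server predicts each client's next weights with the drift of a discretized Ornstein–Uhlenbeck process, requests communication only from the clients whose actual updates deviate most from that prediction, and substitutes the OU estimates for the rest. The OU parameters, the top-k selection rule, the plain averaging step, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of OU-based client
# selection and update estimation for communication-constrained FL.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical OU parameters; the diffusion (noise) term is dropped in the
# server-side point estimate, so only the mean-reversion rate and step matter.
THETA, DT = 0.5, 1.0

def ou_predict(w_prev, mu):
    """Server-side estimate of a non-communicating client's next weights:
    drift of the discretized OU process dw = theta * (mu - w) dt."""
    return w_prev + THETA * (mu - w_prev) * DT

def local_update(w, data_grad):
    """Stand-in for a local SGD step; data_grad plays the role of the
    client's gradient signal, which the server does not observe."""
    return w - 0.1 * data_grad

# One communication round: only `budget` of `n_clients` clients report.
n_clients, dim, budget = 10, 4, 5
global_w = np.zeros(dim)
client_w = np.tile(global_w, (n_clients, 1))

new_w = np.array([local_update(client_w[k], rng.normal(size=dim))
                  for k in range(n_clients)])
predicted = np.array([ou_predict(client_w[k], mu=global_w)
                      for k in range(n_clients)])

# Informativeness score: how far the true update strays from the OU estimate.
scores = np.linalg.norm(new_w - predicted, axis=1)
selected = np.argsort(scores)[-budget:]        # top-`budget` clients communicate

# Aggregate communicated weights for selected clients, OU estimates otherwise.
agg = predicted.copy()
agg[selected] = new_w[selected]
global_w = agg.mean(axis=0)

print(f"communicated {len(selected)}/{n_clients} clients; "
      f"new global weight norm = {np.linalg.norm(global_w):.4f}")
```

In this toy round, half of the clients stay silent, yet the server still forms a full aggregate because the missing updates are filled in by the OU predictions rather than being dropped.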
