Abstract

Federated learning (FL) has emerged as a promising distributed learning paradigm that enables a large number of mobile devices to cooperatively train a model without sharing their raw data. The iterative training process of FL incurs considerable computation and communication overhead. The workers participating in FL are usually heterogeneous, and those with limited capabilities may become the bottleneck of model training. To address the challenges of resource overhead and system heterogeneity, this paper proposes an efficient FL framework, called FedMP, that improves both computation and communication efficiency over heterogeneous workers through adaptive model pruning. We theoretically analyze the impact of the pruning ratio on training performance, and employ a Multi-Armed Bandit based online learning algorithm to adaptively determine different pruning ratios for heterogeneous workers, even without any prior knowledge of their capabilities. As a result, each worker in FedMP can train and transmit the sub-model that fits its own capabilities, accelerating the training process without hurting model accuracy. To prevent the diverse structures of pruned models from affecting training convergence, we further present a new parameter synchronization scheme, called Residual Recovery Synchronous Parallel (R2SP). Moreover, the proposed framework can be extended to the peer-to-peer (P2P) setting. Extensive experiments on physical devices demonstrate that FedMP is effective under different heterogeneity scenarios and data distributions, and can provide up to 4.1× speedup compared to existing FL methods.
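
To make the bandit-driven pruning-ratio selection described above more concrete, the following Python sketch shows how one worker's ratio could be chosen with a standard UCB1 bandit, where each arm is a candidate pruning ratio and the reward reflects training progress per unit of round time. The candidate ratio set, the reward definition, and the simulated_round stub are assumptions made for this illustration only; they are not FedMP's exact algorithm or reward design.

import math
import random

# Minimal sketch: a UCB1-style multi-armed bandit that picks a pruning ratio
# for one worker. Arm set, reward shape, and the simulated round below are
# illustrative assumptions, not the exact design used in FedMP.

CANDIDATE_RATIOS = [0.2, 0.4, 0.6, 0.8]          # candidate pruning ratios (arms)

class PruningRatioBandit:
    def __init__(self, ratios):
        self.ratios = ratios
        self.counts = [0] * len(ratios)          # times each ratio was chosen
        self.values = [0.0] * len(ratios)        # running mean reward per ratio
        self.total = 0                           # total number of selections

    def select(self):
        """Return (arm index, ratio) chosen by the UCB1 rule."""
        self.total += 1
        for i, c in enumerate(self.counts):
            if c == 0:                           # try every ratio once first
                return i, self.ratios[i]
        scores = [
            self.values[i] + math.sqrt(2.0 * math.log(self.total) / self.counts[i])
            for i in range(len(self.ratios))
        ]
        best = max(range(len(self.ratios)), key=scores.__getitem__)
        return best, self.ratios[best]

    def update(self, arm, reward):
        """Fold a reward in [0, 1] into the running mean of the chosen arm."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def simulated_round(ratio):
    """Stand-in for one local training round on a pruned sub-model.
    Returns (loss reduction, wall-clock time); a real system would measure these."""
    time_cost = 1.0 + 4.0 * (1.0 - ratio) + random.uniform(0.0, 0.5)
    loss_drop = (1.0 - 0.5 * ratio) * random.uniform(0.5, 1.0)
    return loss_drop, time_cost

bandit = PruningRatioBandit(CANDIDATE_RATIOS)
for _ in range(200):
    arm, ratio = bandit.select()
    loss_drop, time_cost = simulated_round(ratio)
    reward = min(1.0, loss_drop / time_cost)     # progress per unit time as reward
    bandit.update(arm, reward)

print("estimated reward per ratio:", dict(zip(CANDIDATE_RATIOS, bandit.values)))

In this toy setting, the bandit gradually concentrates on the ratio that gives the best loss reduction per second for that worker, without any prior knowledge of the worker's capability.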
