Abstract

In edge computing (EC), federated learning (FL) enables massive numbers of devices to collaboratively train AI models without exposing local data. To avoid the potential bottleneck of the parameter server (PS) architecture, we concentrate on decentralized federated learning (DFL), which adopts peer-to-peer (P2P) communication without maintaining a global model. However, due to the intrinsic features of EC, e.g., resource limitation and heterogeneity, network dynamics, and non-IID data, DFL with a fixed P2P topology and/or an identical model compression ratio for all workers suffers from a slow convergence rate. In this paper, we propose an efficient algorithm (termed CoCo) that accelerates DFL by jointly optimizing topology Construction and model Compression. Concretely, we adaptively construct the P2P topology and determine a specific compression ratio for each worker to cope with system dynamics and heterogeneity under bandwidth constraints. To reflect how non-IID data affect the consistency of local models in DFL, we introduce the consensus distance, i.e., the discrepancy between local models, as the quantitative metric guiding the fine-grained operations of the joint optimization. Extensive simulation results show that CoCo achieves a 10× speedup and reduces communication cost by about 50% on average compared with existing DFL baselines.
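The abstract describes the consensus distance only informally, as the discrepancy between local models. As a rough illustration, assuming the common root-mean-square form used in decentralized SGD analyses (not necessarily the paper's exact definition), it could be computed per round as follows; the function name and parameter layout are hypothetical.

```python
# Minimal sketch of a consensus-distance metric for DFL workers.
# Assumption: consensus distance is the RMS deviation of local model
# parameters from their (virtual) average; the paper may define it differently.
import numpy as np

def consensus_distance(local_models):
    """local_models: list of flattened parameter vectors, one per worker."""
    stacked = np.stack(local_models)               # shape: (n_workers, n_params)
    mean_model = stacked.mean(axis=0)              # virtual "average" model
    sq_dev = np.sum((stacked - mean_model) ** 2, axis=1)  # ||x_i - x_bar||^2 per worker
    return float(np.sqrt(sq_dev.mean()))           # RMS discrepancy across workers

# Example: three workers whose models have drifted apart under non-IID data.
workers = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
print(consensus_distance(workers))  # larger value -> less consistent local models
```

A metric of this form could then guide the joint optimization, e.g., allocating more communication budget (denser topology links or milder compression) when the measured discrepancy grows.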
