Abstract

Federated Learning (FL) is a promising machine learning paradigm for cooperatively training a global model over highly distributed data located on mobile devices. To improve the communication efficiency of gradient aggregation and model synchronization across large numbers of devices, we propose a relay-assisted FL framework. By breaking the traditional transmission-order constraint and exploiting the broadcast characteristic of relay nodes, we design a novel synchronization scheme named Partial Synchronization Parallel (PSP), in which models and gradients are transmitted simultaneously and aggregated at relay nodes, reducing network traffic. Through rigorous analysis, we prove that PSP achieves the same convergence rate as sequential synchronization approaches. To further accelerate training, we integrate PSP with arbitrary unbiased, error-bounded compression techniques and prove that the convergence properties of the resulting scheme still hold. Extensive experiments in a distributed cluster environment with real-world datasets demonstrate that our approach reduces training time by up to 37 percent compared to state-of-the-art methods.
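The abstract gives no implementation details, so the following is a minimal Python sketch of the core idea behind relay-side aggregation, assuming a star topology in which each relay serves several devices. The function names (`compress`, `relay_aggregate`), the random-sparsification compressor, and the fan-in sizes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical sketch: the topology, names, and compressor below are
# illustrative assumptions, not details from the paper.

def compress(grad: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased random sparsification: keep k coordinates, scale by d/k
    so that E[compress(g)] == g (unbiased, with bounded variance)."""
    d = grad.size
    scale = np.zeros(d)
    kept = rng.choice(d, size=k, replace=False)
    scale[kept] = d / k
    return grad * scale

def relay_aggregate(device_grads, k, rng):
    """A relay sums the compressed gradients of its attached devices and
    forwards one vector upstream, instead of forwarding each gradient."""
    return sum(compress(g, k, rng) for g in device_grads)

rng = np.random.default_rng(0)
d, k = 10, 4
# Two relays, each serving three devices.
relays = [[rng.normal(size=d) for _ in range(3)] for _ in range(2)]
# The server receives one partial sum per relay: 2 uplink messages
# instead of 6, which is the source of the traffic reduction.
global_grad = sum(relay_aggregate(grads, k, rng) for grads in relays) / 6
```

Because the compressor is unbiased (its expectation equals the true gradient) and its error is bounded, the aggregated update remains an unbiased estimate of the full-precision average, which is what allows the convergence guarantees to carry over.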
