Abstract

Federated learning facilitates the training of global models in a distributed manner without requiring the sharing of raw data. This paper introduces two novel symmetric Alternating Direction Method of Multipliers (ADMM) algorithms for federated learning. Both algorithms utilize a convex combination of the current local and global variables to generate relaxed steps that improve computational efficiency, and they integrate two dual-update steps with different relaxation factors into the ADMM framework to boost accuracy and the convergence rate. Another key feature is the use of weak parametric assumptions to enhance computational feasibility. Furthermore, the global update in the second algorithm occurs only at certain steps (e.g., at steps that are multiples of a pre-defined integer) to improve communication efficiency. Theoretical analysis demonstrates linear convergence under reasonable conditions, and experimental results confirm the superior convergence and efficiency of the proposed algorithms compared with existing methods.
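
To illustrate the two-dual-step structure and the periodic global update described above, the following is a minimal sketch of a symmetric consensus ADMM round on a toy federated least-squares problem. The relaxation factors r1 and r2, the mixing weight alpha, the penalty rho, and the update period tau are assumed illustrative parameters; this is not the paper's exact algorithm or tuning.

# Illustrative sketch only: generic symmetric (two-dual-step) consensus ADMM
# with a relaxed step and a periodic global update, on toy least-squares data.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, rho = 5, 3, 1.0
alpha = 0.7        # convex-combination weight for the relaxed step (assumed)
r1, r2 = 0.5, 1.0  # relaxation factors of the two dual updates (assumed)
tau = 2            # global variable aggregated only every tau rounds (assumed)

# Toy private data: each client holds its own least-squares problem.
A = [rng.normal(size=(20, dim)) for _ in range(num_clients)]
b = [Ai @ rng.normal(size=dim) + 0.01 * rng.normal(size=20) for Ai in A]

x = [np.zeros(dim) for _ in range(num_clients)]  # local primal variables
y = [np.zeros(dim) for _ in range(num_clients)]  # dual variables
z = np.zeros(dim)                                # global (consensus) variable

for k in range(50):
    for i in range(num_clients):
        # Local step: minimize f_i(x) + y_i'(x - z) + (rho/2)||x - z||^2 in closed form.
        H = A[i].T @ A[i] + rho * np.eye(dim)
        g = A[i].T @ b[i] - y[i] + rho * z
        x[i] = np.linalg.solve(H, g)
        # Relaxed step: convex combination of the new local and current global variable.
        x_hat = alpha * x[i] + (1 - alpha) * z
        # First dual update, with relaxation factor r1.
        y[i] = y[i] + r1 * rho * (x_hat - z)
    if k % tau == 0:
        # Communication-saving variant: aggregate the global variable only every tau rounds.
        z = np.mean([x[i] + y[i] / rho for i in range(num_clients)], axis=0)
    for i in range(num_clients):
        # Second dual update, with relaxation factor r2.
        y[i] = y[i] + r2 * rho * (x[i] - z)

print("consensus residual:", max(np.linalg.norm(x[i] - z) for i in range(num_clients)))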
