Abstract

The federated learning framework broadens the application of deep learning algorithms to existing network architectures, with model parameters aggregated in a centralized manner. However, some federated learning participants are often inaccessible, for example due to a power shortage or a dormant state. This motivates an alternative in which parameter aggregation operates in an ad hoc manner based on consensus computing. Moreover, since a caching mechanism is indispensable to any federated learning mobile node, the connection between caching and consensus computing must also be investigated. In this article, we first propose a novel federated learning paradigm that supports an ad hoc operation mode for federated learning participants. Second, a discrete-time dynamic equation and its control law are formulated to meet the demands of the federated learning framework, with a quantized caching scheme designed to mask the uncertainties arising from both asynchronous updates and measurement noises. We then derive the consensus conditions and the convergence of the consensus protocol analytically, and provide a quantized caching strategy that optimizes the convergence speed. Our major contribution is a basic theory of the distributed consensus problem for the federated learning framework, and the theoretical results are validated by numerical simulations.
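The abstract does not reproduce the paper's equations. Purely as an illustrative sketch, a standard discrete-time quantized consensus update over a network of n agents takes the following form, where x_i is the local model parameter held by agent i, N_i its neighbor set, a_{ij} the adjacency weights, ε a step size, and q(·) a quantizer standing in for the cached, quantized values (all of these symbols are assumptions, not taken from the paper):

\[
x_i(k+1) = x_i(k) + \varepsilon \sum_{j \in \mathcal{N}_i} a_{ij}\bigl(q(x_j(k)) - q(x_i(k))\bigr), \qquad i = 1, \dots, n.
\]

Under standard assumptions (a connected communication graph and a sufficiently small ε), the unquantized version of such a protocol drives every x_i toward the average of the initial values; the paper studies how quantized caching, asynchronous updates, and measurement noises affect the corresponding consensus conditions and convergence speed.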
