Abstract

To handle the data explosion in the era of the Internet of Things, it is of interest to investigate decentralized networks, with the aim of relieving the burden on the central server while preserving data privacy. In this work, we develop a fully decentralized federated learning (FL) framework with an inexact stochastic parallel random walk alternating direction method of multipliers (ISPW-ADMM). Compared with the current state of the art, the proposed ISPW-ADMM offers more efficient communication and stronger privacy preservation, is partially immune to the effects of time-varying dynamic networks and stochastic data collection, and still converges quickly. Benefiting from stochastic gradients and biased first-order moment estimation, the proposed framework can be applied to any decentralized FL task over time-varying graphs. To demonstrate the practicability of such a framework in providing fast convergence, high communication efficiency, and a degree of noise robustness for a specific on-board mission, we study an extreme learning machine-based FL model for beamforming design in unmanned aerial vehicle communications, as verified by numerical simulations.
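For illustration only, the sketch below is not the paper's ISPW-ADMM (whose exact updates are not reproduced here); it shows the general shape of a random-walk, ADMM-style consensus scheme with an inexact primal step driven by a stochastic gradient, on a toy decentralized least-squares problem. All variable names, step sizes, the ring topology, and the problem data are illustrative assumptions.

# Illustrative sketch of a random-walk consensus-ADMM-style update with an
# inexact (single-sample stochastic-gradient) primal step. NOT the paper's
# ISPW-ADMM; toy problem: min_x sum_i 0.5*||A_i x - b_i||^2 over n agents.
import numpy as np

rng = np.random.default_rng(0)
n_agents, d, m = 10, 5, 20
A = [rng.normal(size=(m, d)) for _ in range(n_agents)]      # local data matrices
x_true = rng.normal(size=d)
b = [Ai @ x_true + 0.1 * rng.normal(size=m) for Ai in A]     # noisy local labels

# Ring graph; a time-varying network would reshuffle these neighbor lists over time.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

rho, lr = 1.0, 0.05          # ADMM penalty and inexact-step size (assumed values)
z = np.zeros(d)              # consensus variable carried by the walking token
x = [np.zeros(d) for _ in range(n_agents)]   # local primal variables
u = [np.zeros(d) for _ in range(n_agents)]   # local scaled dual variables
i = 0                        # token starts at agent 0

for k in range(20000):
    # Inexact primal step: one stochastic-gradient step on the local augmented
    # Lagrangian, using a single randomly drawn local sample instead of an exact solve.
    j = rng.integers(m)
    a_j, b_j = A[i][j], b[i][j]
    grad = (a_j @ x[i] - b_j) * a_j + rho * (x[i] - z + u[i])
    y_old = x[i] + u[i]
    x[i] = x[i] - lr * grad

    # Scaled dual ascent at the visited agent, then an incremental refresh of the
    # consensus variable z, which tracks the running average of x_i + u_i.
    u[i] = u[i] + x[i] - z
    z = z + (x[i] + u[i] - y_old) / n_agents

    # Token performs a random walk to a neighbor on the graph.
    i = int(rng.choice(neighbors[i]))

print("consensus error:", np.linalg.norm(z - x_true))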
