Abstract

Federated learning (FL) is emerging as a new paradigm for training a machine learning model in cooperative networks. The model parameters are optimized collectively by large populations of interconnected devices, acting as cooperative learners that exchange local model updates with the server, rather than user data. The FL framework is, however, centralized: it relies on the server to fuse the model updates and is therefore limited by a single point of failure. In this paper, we propose a distributed FL approach that performs a decentralized fusion of local model parameters by leveraging mutual cooperation between the devices and local (in-network) data operations via consensus-based methods. Communication with the server can be partially, or fully, replaced by in-network operations, scaling down the traffic load on the server and paving the way towards a fully serverless FL approach. This proposal also lays the groundwork for integrating FL methods into future (beyond 5G) wireless networks characterized by distributed and decentralized connectivity. The proposed algorithms are implemented and published as open source, and they are designed and validated on experimental data.
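
To make the consensus-based fusion concrete, the sketch below shows one plausible form of the in-network averaging step: each device repeatedly mixes its parameter vector with those of its neighbors until all devices agree on the network-wide average, i.e., the quantity a central server would otherwise compute. This is a minimal illustration under assumed details, not the authors' released implementation; the function name consensus_step, the ring topology, and the step size epsilon are all illustrative choices.

    # Hedged sketch of consensus-based fusion of local model parameters
    # (illustrative; not the paper's published code).
    import numpy as np

    def consensus_step(params, adjacency, epsilon):
        """One synchronous consensus iteration over the device graph.

        params:    (n_devices, n_weights) array; row i is device i's parameters.
        adjacency: (n_devices, n_devices) symmetric 0/1 matrix of links.
        epsilon:   step size; epsilon < 1 / max_degree guarantees convergence.
        """
        degrees = adjacency.sum(axis=1, keepdims=True)
        # Each device moves toward its neighbors' parameters:
        # x_i <- x_i + epsilon * sum_j a_ij * (x_j - x_i)
        neighbor_sum = adjacency @ params
        return params + epsilon * (neighbor_sum - degrees * params)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_devices, n_weights = 5, 3
        params = rng.normal(size=(n_devices, n_weights))  # local model updates
        ring = np.roll(np.eye(n_devices), 1, axis=1)
        adjacency = ring + ring.T                         # ring topology, degree 2
        epsilon = 0.4                                     # < 1 / max_degree = 0.5
        for _ in range(50):
            params = consensus_step(params, adjacency, epsilon)
        # All rows converge to the network-wide average of the initial
        # parameters, with no server in the loop.
        print(np.allclose(params, params.mean(axis=0), atol=1e-3))

In this sketch the server's fusion role is fully replaced by local exchanges; a partial replacement, as mentioned in the abstract, would correspond to interleaving a few consensus iterations with occasional uploads to the server.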
