Abstract

Federated Learning (FL) improves the training phase of machine learning by distributing model training across a set of clients and recombining the resulting models on a server. All clients share the same model architecture, each training on a subset of the complete dataset, which addresses dataset-size issues and privacy concerns. However, a central server creates a bottleneck and weakens failure tolerance in truly distributed environments. This work follows the line of applying consensus to FL as a decentralized approach. The paper presents a fully distributed consensus formulation in a multi-agent system (MAS) model, together with a new asynchronous MAS consensus algorithm. It also describes and tests an implementation of these learning algorithms on an actual agent platform, along with simulation results from a case study on electrical production in Australian wind farms.
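To illustrate the idea of replacing the central server with peer-to-peer consensus, the following is a minimal sketch (not the paper's algorithm; all function names, the ring topology, the toy 1-D model, and the learning rate are illustrative assumptions). Each agent takes a local gradient step on its own data, then averages its model with its neighbours instead of uploading it to a server:

```python
# Minimal sketch of decentralized FL via gossip consensus on a ring.
# All names and parameters are illustrative, not from the paper.

def local_update(w, data, lr=0.02):
    """One gradient step of a 1-D least-squares model y ~ w*x on local data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def consensus_step(models, mixing=0.5):
    """Each agent averages its model with its two ring neighbours,
    replacing the central aggregation server of classic FL."""
    n = len(models)
    return [
        (1 - mixing) * models[i]
        + mixing * 0.5 * (models[(i - 1) % n] + models[(i + 1) % n])
        for i in range(n)
    ]

def decentralized_fl(datasets, rounds=300):
    models = [0.0] * len(datasets)  # one model copy per agent
    for _ in range(rounds):
        models = [local_update(w, d) for w, d in zip(models, datasets)]
        models = consensus_step(models)  # peer-to-peer averaging
    return models

# Three agents, each holding a different subset of data from y = 2x;
# after training, all local models agree and approximate w = 2.
datasets = [[(1, 2), (2, 4)], [(1, 2), (3, 6)], [(2, 4), (4, 8)]]
final_models = decentralized_fl(datasets)
```

In a synchronous variant such as this one, all agents update in lockstep; the asynchronous consensus the paper proposes would instead let agents exchange and merge models as messages arrive, without a global round barrier.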
