Abstract

The success of machine learning (ML) techniques in the formerly difficult areas of data analysis and pattern extraction has led to their widespread incorporation into many aspects of human life. This success is due partly to the increasing computational power of modern hardware and partly to the improved ability of ML algorithms to process large amounts of data in diverse forms. Despite these advances, certain issues, such as privacy, continue to hinder the development of the field. In this context, a privacy-preserving, distributed, and collaborative machine learning technique called federated learning (FL) has emerged. Its core idea is that, unlike in traditional machine learning, user data is not collected on a central server. Instead, models are sent to clients to be trained locally, and only the resulting models, without the associated data, are returned to the server, where they are combined into a single global model. Aggregation algorithms therefore play a crucial role in the federated learning process, as they integrate the knowledge of the participating clients by merging the locally trained models into a global one. To this end, this paper explores and investigates several federated learning aggregation strategies and algorithms. First, a brief summary of federated learning is given so that the role of an aggregation algorithm within an FL system can be understood. This is followed by an explanation of aggregation strategies and a discussion of current aggregation algorithm implementations, highlighting the unique contribution that each brings to the body of knowledge. Finally, limitations and possible future directions are described to help future researchers determine the best place to begin their own investigations.
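To make the server-side aggregation step described above concrete, the following is a minimal sketch of weighted model averaging in the spirit of FedAvg, one of the aggregation algorithms surveyed later in the paper. The function names, the dictionary-of-arrays representation of model parameters, and the sample data are illustrative assumptions, not taken from any specific framework or from the paper itself.

```python
# Minimal sketch of server-side aggregation in federated learning
# (FedAvg-style weighted averaging). Parameter names and data shapes
# are illustrative only.
from typing import Dict, List

import numpy as np

ModelWeights = Dict[str, np.ndarray]  # layer name -> parameter array


def aggregate(client_models: List[ModelWeights],
              client_sizes: List[int]) -> ModelWeights:
    """Combine locally trained client models into one global model.

    Each client's parameters are weighted by its number of local
    training samples, as in federated averaging (FedAvg).
    """
    total = float(sum(client_sizes))
    global_model: ModelWeights = {}
    for name in client_models[0]:
        global_model[name] = sum(
            (size / total) * model[name]
            for model, size in zip(client_models, client_sizes)
        )
    return global_model


if __name__ == "__main__":
    # Two hypothetical clients sharing a single-layer model.
    client_a = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
    client_b = {"w": np.array([3.0, 4.0]), "b": np.array([1.5])}
    print(aggregate([client_a, client_b], client_sizes=[100, 300]))
```

In this sketch, only model parameters reach the server; the clients' raw data never leaves their devices, which is the privacy property the abstract emphasizes.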
