Abstract

As a new distributed machine learning framework for privacy protection, Federated Learning (FL) enables a large number of Internet of Things (IoT) devices (e.g., mobile phones and tablets) to participate in the collaborative training of a machine learning model. FL protects the data privacy of IoT devices without exposing their raw data. However, the heterogeneity of IoT devices may degrade the overall training process due to the straggler issue. To tackle this problem, we propose a gear-based asynchronous federated learning (AsyFed) architecture. It adds a gear layer between the clients and the FL server as a mediator that stores the model parameters. The key insight is to group clients with similar training abilities into the same gear. Clients within the same gear train synchronously, while the gears communicate with the global FL server asynchronously. In addition, we propose a T-step mechanism that reduces the aggregation weight of slow gears when they communicate with the FL server. Extensive experimental evaluations indicate that AsyFed outperforms FedAvg (the baseline synchronous FL scheme) and several state-of-the-art asynchronous FL methods in terms of training accuracy or speed under different data distributions. The only overhead, which is negligible, is the extra gear layer used to store part of the model parameters.
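To make the gear mechanism concrete, the minimal Python sketch below illustrates the server-side logic under stated assumptions: the function names (group_into_gears, t_step_weight, async_update), the speed-based grouping rule, the mixing rate alpha, and the exponential staleness decay are all our assumptions, since the abstract does not specify these details.

import numpy as np

def group_into_gears(client_speeds, num_gears):
    # Group clients with similar training speed into the same gear.
    # Hypothetical criterion: sort by measured speed and split evenly;
    # the paper's exact grouping rule is not given in the abstract.
    order = np.argsort(client_speeds)
    return np.array_split(order, num_gears)

def t_step_weight(staleness, t_steps, alpha=0.6):
    # Assumed T-step rule: a gear whose model lags the global model by
    # more than T versions contributes with an exponentially decayed weight.
    if staleness <= t_steps:
        return alpha
    return alpha * 0.5 ** (staleness - t_steps)

def async_update(global_model, gear_model, staleness, t_steps):
    # Server-side asynchronous mixing of one gear's synchronized model
    # into the global model, weighted by the T-step rule above.
    w = t_step_weight(staleness, t_steps)
    return (1.0 - w) * global_model + w * gear_model

# Toy usage: gears push their locally averaged models at different
# staleness levels; slower (staler) gears receive smaller weights.
rng = np.random.default_rng(0)
global_model = np.zeros(10)
gears = group_into_gears(rng.uniform(0.1, 1.0, size=20), num_gears=4)
for staleness in (0, 1, 5):
    gear_model = rng.normal(size=10)  # stand-in for a gear's FedAvg result
    global_model = async_update(global_model, gear_model, staleness, t_steps=3)

In this sketch, each gear first performs ordinary synchronous averaging among its own clients, then pushes the result to the server whenever it finishes, so fast gears never wait for slow ones.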
