Abstract

Federated learning, a distributed machine learning framework that protects data privacy, has recently received increasing attention. In federated learning, a shared global model is obtained through parameter exchange, which requires frequent parameter communication during training. The limited bandwidth of the IoT and edge devices on which federated learning is deployed further constrains communication and learning efficiency. In this paper, we present an enhanced federated learning technique based on a feature-aligned filter selection method. We argue that the training gap between the global model and each node's local model should be the focus during training: by selecting the locally contributing parameters, communication efficiency can be improved while the performance of the shared global model is preserved. Accordingly, the Geometric Median of each layer in the global model is adopted as the criterion for selecting important filters in the local model, and only these parameters are exchanged with other nodes to achieve efficient communication. Results under a variety of experimental settings demonstrate that the proposed federated learning scheme effectively enhances communication efficiency while maintaining the performance of the global model. Compared with state-of-the-art methods, up to a 6.5× improvement in communication efficiency is obtained.
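To make the selection criterion concrete, the following is a minimal sketch of Geometric-Median-based filter selection. It assumes Weiszfeld's algorithm for approximating the Geometric Median and assumes that the filters most distant from the global layer's median are treated as carrying the local contribution; the paper's exact procedure, function names, and selection direction may differ.

```python
import numpy as np

def geometric_median(points, iters=50, eps=1e-8):
    """Approximate the geometric median of a set of flattened filters
    using Weiszfeld's algorithm (an illustrative choice of solver)."""
    y = points.mean(axis=0)  # initialize at the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - y, axis=1), eps)
        w = 1.0 / d  # inverse-distance weights
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

def select_filters(local_layer, global_layer, k):
    """Select k local filters by their distance to the Geometric Median
    of the corresponding global-model layer. Here the MOST distant
    filters are chosen, on the assumption that they diverge most from
    the global model and thus encode the local contribution; only these
    would be communicated."""
    flat_local = local_layer.reshape(local_layer.shape[0], -1)
    flat_global = global_layer.reshape(global_layer.shape[0], -1)
    gm = geometric_median(flat_global)
    dists = np.linalg.norm(flat_local - gm, axis=1)
    return np.argsort(-dists)[:k]  # indices of selected filters
```

In a full pipeline, each client would apply `select_filters` per layer before uploading, transmitting only the selected filters' parameters and indices, which is the source of the communication savings the abstract reports.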

