Abstract

As a new distributed machine learning paradigm, federated learning has gained increasing attention in both industry and the research community. However, federated learning is challenging to implement on edge devices with limited resources and heterogeneous data. This study aims to obtain lightweight, personalized models through pruning and masking under limited resources and heterogeneous data. Specifically, the server first sends a subnetwork to each client according to its mask, and the client prunes the subnetwork with the alternating direction method of multipliers (ADMM), removing unimportant parameters and reducing the cost of training and communication. Meanwhile, a mask is used to record which parameters of the model have been pruned. Then, the unpruned parts and masks of the local models are transmitted to the server for aggregation. The experimental results show that the accuracy of the proposed model is improved by 9.36%, and the communication cost is reduced by a factor of 1.45 compared with state-of-the-art models. Finally, we deploy flower identification models in Android Studio to illustrate the practicality of the proposed method.
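
To make the mask-guided workflow concrete, the following is a minimal sketch of one communication round. It is not the paper's implementation: the function names are hypothetical, the ADMM pruning step is approximated here by simple magnitude pruning, and the aggregation averages each parameter only over the clients whose masks keep it.

```python
import numpy as np

def server_download(global_weights, mask):
    """Server sends only the subnetwork selected by the client's mask."""
    return global_weights * mask

def client_prune(weights, prune_ratio=0.5):
    """Stand-in for the ADMM-based pruning step: keep the largest-magnitude
    weights, zero out the rest, and return a binary mask recording which
    positions survived (hypothetical simplification, not the paper's code)."""
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size * (1 - prune_ratio)))       # number of weights to keep
    threshold = np.sort(flat)[::-1][k - 1]
    mask = (np.abs(weights) >= threshold).astype(np.float32)
    return weights * mask, mask

def server_aggregate(client_weights, client_masks):
    """Aggregate only the unpruned entries: average each position over the
    clients whose masks mark it as kept."""
    weight_sum = np.sum(client_weights, axis=0)
    mask_sum = np.sum(client_masks, axis=0)
    return np.divide(weight_sum, mask_sum,
                     out=np.zeros_like(weight_sum),
                     where=mask_sum > 0)

# Toy round with two clients and a 4x4 weight matrix.
rng = np.random.default_rng(0)
global_w = rng.normal(size=(4, 4)).astype(np.float32)
masks = [np.ones_like(global_w), np.ones_like(global_w)]

local_updates, local_masks = [], []
for m in masks:
    sub = server_download(global_w, m)           # 1. download the masked subnetwork
    pruned, new_mask = client_prune(sub, 0.5)    # 2. prune locally, update the mask
    local_updates.append(pruned)                 # 3. upload unpruned parameters
    local_masks.append(new_mask)                 #    together with the mask

global_w = server_aggregate(local_updates, local_masks)
```

Because only the unpruned parameters and the binary masks travel between client and server, both upload and download volume shrink with the pruning ratio, which is where the claimed communication savings come from.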
