Abstract

Federated Learning (FL) is a distributed approach to training machine learning and deep learning models on data spread across heterogeneous edge devices. The global model at the server learns by aggregating the local models sent by the edge devices, which preserves data privacy and lowers communication costs because only model updates are communicated. The edge devices on which the models are trained typically have limited power, storage, and computational resources. This paper addresses the computation overhead on edge devices by presenting a new method named FedPruNet, which trains the model on edge devices using neural network model pruning. The proposed method reduces the computation overhead on edge devices by pruning the model. Experimental results show that, for a fixed number of communication rounds, the model parameters are pruned by up to 41.35% and 65% on the MNIST and CIFAR-10 datasets, respectively, without compromising accuracy compared to training FL edge devices without pruning.

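As an illustrative sketch only, and not the paper's exact FedPruNet algorithm, the following Python/PyTorch snippet shows how a federated client could combine local training with magnitude-based weight pruning before returning its update, and how a server could average the pruned models in a FedAvg style. All function names, the pruning amount, and the optimizer settings are assumptions made for demonstration.

import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def local_update(global_model, data_loader, prune_amount=0.4, epochs=1, lr=0.01):
    # Train a copy of the current global model on the client's local data.
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in data_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    # Magnitude (L1) unstructured pruning: zero out the smallest weights in
    # each conv/linear layer, reducing the client's computation and the
    # effective size of the update returned to the server.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=prune_amount)
            prune.remove(module, "weight")  # bake the zeros into the weights
    return model.state_dict()

def aggregate(state_dicts):
    # Plain FedAvg-style averaging of the (pruned) client models.
    return {key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
            for key in state_dicts[0]}

Zeroing the smallest-magnitude weights is only one common pruning criterion; the criterion and schedule actually used by FedPruNet are described in the body of the paper.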