Abstract

Federated learning (FL) has gained significant attention in academia and industry due to its privacy-preserving nature. FL is a decentralized approach that allows clients to collaboratively train a model by exchanging updates with a parameter server over the internet. This approach utilizes localized data and protects clients’ privacy, but it can result in high communication overhead when transmitting high-dimensional models. This study introduces FedNISP, a federated Convolutional Neural Network pruning method based on the Neuron Importance Score Propagation (NISP) pruning strategy, to reduce communication costs. In FedNISP, the importance scores of output-layer neurons are back-propagated layer-wise to the other neurons in the network. The central server broadcasts the pruned weights to all selected clients. Each participating client reconstructs the full model weights using the binary mask and locally trains the model on its private data. The significant neurons from the locally updated model are then selected using the mask and shared with the server. The server receives model updates from participating clients, reconstructs the full weights, and aggregates them. Experiments conducted on the MNIST and CIFAR10 datasets with MNIST2NN and VGGNet models show that FedNISP outperforms magnitude and random pruning strategies, incurring minimal accuracy loss while achieving a significant compression ratio.
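The mask-based communication flow described above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: it assumes NumPy arrays, treats each model as a single flat weight vector, and uses a placeholder `local_train` in place of real SGD on private data; the importance scores standing in for NISP are randomly generated. It only shows how a shared binary mask lets the server send pruned weights, clients reconstruct and train, and the server reconstruct and average the returned updates.

```python
# Hedged sketch of the FedNISP-style prune / reconstruct / aggregate round.
# Assumptions (illustrative, not from the paper): flat weight vectors,
# random stand-in importance scores, and a dummy local_train() step.
import numpy as np

rng = np.random.default_rng(0)

def prune_with_mask(weights: np.ndarray, keep_mask: np.ndarray) -> np.ndarray:
    """Keep only the values at positions the binary mask marks as important."""
    return weights[keep_mask]

def reconstruct(pruned: np.ndarray, keep_mask: np.ndarray) -> np.ndarray:
    """Rebuild a full weight vector, zero-filling the pruned positions."""
    full = np.zeros(keep_mask.shape, dtype=pruned.dtype)
    full[keep_mask] = pruned
    return full

def local_train(weights: np.ndarray) -> np.ndarray:
    """Placeholder for a client's local training on private data."""
    return weights + 0.01 * rng.standard_normal(weights.shape)

# --- Server: derive a keep-mask from (dummy) neuron importance scores ---
global_weights = rng.standard_normal(10)
importance = np.abs(rng.standard_normal(10))             # stand-in for NISP scores
keep_mask = importance >= np.quantile(importance, 0.5)   # keep the top 50%

# Server broadcasts only the pruned weights (the mask is shared once).
broadcast = prune_with_mask(global_weights, keep_mask)

# --- Clients: reconstruct full weights, train locally, return masked update ---
client_updates = []
for _ in range(3):
    full = reconstruct(broadcast, keep_mask)
    updated = local_train(full)
    client_updates.append(prune_with_mask(updated, keep_mask))

# --- Server: reconstruct each update and average them (FedAvg-style) ---
aggregated = np.mean([reconstruct(u, keep_mask) for u in client_updates], axis=0)
print("aggregated weights:", aggregated.round(3))
```

In this sketch only the masked entries travel between server and clients, which is the source of the communication savings; the compression ratio is controlled by how aggressively the quantile threshold prunes the importance scores.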
