Abstract

Many current convolutional neural networks struggle to meet practical application requirements because of their enormous number of parameters. To accelerate network inference, increasing attention has been paid to network compression, and network pruning is among the simplest and most efficient ways to compress and speed up a network. In this paper, a pruning algorithm for lightweight tasks is proposed, and a pruning strategy based on feature representation is investigated. Unlike other pruning approaches, the proposed strategy is guided by the practical task and eliminates filters that are irrelevant to it. After pruning, the network is compacted to a smaller size, and its accuracy is easily recovered with fine-tuning. The performance of the proposed pruning algorithm is validated on widely used image datasets, and the experimental results show that the proposed algorithm is better suited to pruning filters that are irrelevant to the fine-tuning dataset.

Highlights

  • Over the last decades, deep learning has developed rapidly and novel neural networks have emerged continually, especially convolutional neural networks (CNNs)

  • Map-EX denotes the pruned model proposed in this paper, which is based on feature representation and accounts for receptive field expansion

  • Focusing on the redundant parameters and difficult training of neural networks, a convolutional neural network pruning algorithm based on feature representations is proposed. The feature maps of the convolutions in each layer are computed through network iteration. The response intensities of the foreground and background features are obtained from the feature maps using the bounding box label. The correlation between the filters and the object is then established from these feature representations, which forms the basis of the pruning algorithm (a sketch of this scoring step is given after this list)
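
The following is a minimal sketch of this foreground-versus-background scoring idea in PyTorch. The function name filter_relevance_scores, the exact ratio used as the score, and the keep-top-75% rule are illustrative assumptions, not the paper's precise formulation; in particular, Map-EX's receptive field expansion is not modeled here.

```python
import torch

def filter_relevance_scores(feature_maps, bbox):
    """Score each filter by how strongly it responds inside a
    ground-truth bounding box versus the background.

    feature_maps: tensor of shape (C, H, W) from one conv layer.
    bbox: (x1, y1, x2, y2) in feature-map coordinates.
    Returns a (C,) tensor; higher means more object-relevant.
    """
    x1, y1, x2, y2 = bbox
    C, H, W = feature_maps.shape

    # Mean absolute activation inside the box (foreground response).
    fg = feature_maps[:, y1:y2, x1:x2].abs().mean(dim=(1, 2))

    # Mean absolute activation outside the box (background response).
    total = feature_maps.abs().sum(dim=(1, 2))
    fg_sum = feature_maps[:, y1:y2, x1:x2].abs().sum(dim=(1, 2))
    bg_area = H * W - (y2 - y1) * (x2 - x1)
    bg = (total - fg_sum) / max(bg_area, 1)

    # Relevance: foreground response relative to background response.
    return fg / (bg + 1e-8)

# Example: keep the most object-relevant filters, prune the rest.
fmap = torch.rand(64, 14, 14)                  # toy feature maps
scores = filter_relevance_scores(fmap, (3, 3, 10, 10))
keep = scores.argsort(descending=True)[:48]    # keep top 75% of filters
```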


Summary

Introduction

Deep learning has developed rapidly, and various novel neural networks continue to emerge, especially convolutional neural networks (CNNs). Compression methods for CNNs can be mainly classified into four categories: structure optimization, quantization and precision reduction, knowledge distillation, and network pruning. Zhou et al. [11] optimized the network during training by adding a sparse constraint to the loss function and compressing the resulting sparse matrices in the convolutional layers. To find a network structure suited to the fine-tuning dataset, a feature representation-based pruning algorithm is proposed; its main contributions are outlined in the highlights above.
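
Zhou et al.'s exact constraint is not reproduced in this summary; a common way to realize such a sparse constraint is an L1 penalty on the convolutional weights added to the task loss. The sketch below assumes a PyTorch classification setting; sparse_training_loss and the weight lam are illustrative names, not the authors' API.

```python
import torch
import torch.nn as nn

def sparse_training_loss(model, outputs, targets, lam=1e-4):
    """Task loss plus an L1 penalty on convolutional weights.

    The L1 term drives many filter weights toward zero during
    training, so the resulting sparse matrices in the conv layers
    can be compressed afterwards.
    """
    task_loss = nn.functional.cross_entropy(outputs, targets)
    l1_penalty = sum(m.weight.abs().sum()
                     for m in model.modules()
                     if isinstance(m, nn.Conv2d))
    return task_loss + lam * l1_penalty
```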

Related Works
Feature Representation-Based Pruning
Pruning in Multiple Samples
Conclusion
