Abstract

Although many pruning methods have emerged for achieving structural sparsity in convolutional neural networks (CNNs), most of them target architectures such as ResNet. Moreover, previous works prefer to prune filters inside a residual block while keeping the shortcut connections intact, which leads to an imbalanced network structure. In this paper, we focus on a penalty-based method for pruning already-compact networks. In contrast to the widely used $L_{1}$ constraint, which shrinks all parameters uniformly, we propose a novel penalty term whose shape resembles an upside-down Laplace distribution. This penalty imposes more pressure on potentially weak channels while protecting the others during training, avoiding damage to crucial channels, which is especially important for compact architectures. We also design a candidate selection strategy that cooperates with the penalty-based training procedure. In addition, we address the often-ignored problem of pruning residual blocks with a scaling-factor elimination technique. Our method removes 50% of the parameters of MobileNet v1/v2 with tolerable accuracy degradation. We further prune MobileNetV1-SSDLite, compressing its parameters by 60% and demonstrating that the method generalizes to different visual tasks. The experimental results show that our method outperforms pruning frameworks based on channel importance, without the complicated hyper-parameter tuning required by search-based methods.
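
The abstract leaves the penalty's exact functional form to the paper body. As a rough, hypothetical sketch of the idea, the PyTorch snippet below applies a penalty of the flipped-Laplace shape $\lambda(1 - e^{-|\gamma|/b})$ to BatchNorm scaling factors, which channel-pruning methods commonly use as per-channel gates. The function names and the values of $\lambda$ and $b$ are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

def flipped_laplace_penalty(gamma: torch.Tensor,
                            lam: float = 1e-4,
                            b: float = 0.5) -> torch.Tensor:
    # Hypothetical penalty per scaling factor: lam * (1 - exp(-|gamma| / b)).
    # Its gradient, (lam / b) * exp(-|gamma| / b) * sign(gamma), is strongest
    # near zero, so small (potentially weak) channels are pushed toward zero
    # while large (crucial) channels feel almost no pull -- unlike L1, whose
    # gradient lam * sign(gamma) shrinks every channel uniformly.
    return lam * (1.0 - torch.exp(-gamma.abs() / b)).sum()

def penalty_loss(model: nn.Module) -> torch.Tensor:
    # Accumulate the penalty over every BatchNorm scaling vector (gamma);
    # BN gammas are a common stand-in for channel importance in pruning.
    return sum(flipped_laplace_penalty(m.weight)
               for m in model.modules()
               if isinstance(m, nn.BatchNorm2d))

# Usage sketch with a torchvision MobileNetV2:
#   from torchvision.models import mobilenet_v2
#   model = mobilenet_v2()
#   loss = task_loss + penalty_loss(model)  # added to the training objective
```

Note the design contrast with $L_{1}$: here the regularization pressure decays as $|\gamma|$ grows, so channels that prove important during training are largely protected, matching the abstract's stated goal for compact architectures.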
