Abstract
Deep convolutional neural networks (CNNs) are difficult to deploy on resource-constrained devices because of their large computational cost. Channel pruning is an effective way to reduce computation and accelerate network inference. Most channel pruning methods evaluate channel importance using statistics from a single structure of the sparse network (the convolutional layer or the batch normalization layer); as a result, they may mistakenly delete important channels. In view of this, we propose a novel method, Collaborative Channel Pruning (CCPrune), which evaluates channel importance by combining the convolution layer weights with the BN layer scaling factors. The proposed method first introduces regularization on the convolution layer weights and the BN layer scaling factors, respectively. It then combines the weights of the convolutional layer with the scaling factor of the BN layer to evaluate the importance of each channel. Finally, it deletes unimportant channels without degrading model performance. The experimental results demonstrate the effectiveness of our method: on CIFAR-10, it reduces the FLOPs of VGG-19 by 85.50% with only a slight drop in accuracy, and it reduces the FLOPs of ResNet-50 by 78.31% without any loss in accuracy.
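The abstract does not give the exact combination rule, so the following is a minimal PyTorch sketch of the collaborative scoring idea, assuming a channel's importance is the product of its filter's (channel-normalized) L1 weight magnitude and the absolute BN scaling factor; the `score_channels` helper and the pruning ratio are illustrative, not the paper's definitive formulation.

```python
import torch
import torch.nn as nn

def score_channels(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> torch.Tensor:
    """Collaboratively score the output channels of a conv layer followed by BN.

    Assumption: importance = (per-filter L1 weight norm, normalized across
    channels) * |BN scaling factor gamma|. The paper's exact rule may differ.
    """
    # L1 norm of each output filter: weight has shape (out_ch, in_ch, kH, kW)
    w_norm = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    w_norm = w_norm / (w_norm.sum() + 1e-12)   # normalize across channels
    gamma = bn.weight.detach().abs()           # BN layer scaling factors
    return w_norm * gamma                      # collaborative importance score

# Usage: mark the least important channels for deletion.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(32)
scores = score_channels(conv, bn)
prune_ratio = 0.5                              # illustrative pruning ratio
threshold = scores.sort().values[int(prune_ratio * scores.numel())]
keep_mask = scores >= threshold
print(f"keeping {int(keep_mask.sum())} of {scores.numel()} channels")
```

In the sparsity-training step described above, the regularization would typically be added to the loss as penalty terms on `conv.weight` and `bn.weight` (e.g., L1 penalties), pushing unimportant channels' scores toward zero before pruning.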