Abstract

Network pruning has been widely used to reduce the high computational cost of deep convolutional neural networks (CNNs). The dominant approach, channel pruning, removes filters from layers based on importance criteria or sparsity training, but it often yields a limited acceleration ratio and encounters difficulties when pruning CNNs with skip connections. Block pruning methods treat a sequence of consecutive layers (e.g., Conv-BN-ReLU) as a block and remove an entire block at a time. However, previous block pruning methods usually introduce new parameters to assist pruning, which leads to additional parameters and extra computation. This work proposes a novel multi-granularity pruning approach that combines block pruning with channel pruning (BPCP). The block pruning (BP) module removes blocks by directly searching for redundant blocks with gradient descent and leaves no extra parameters in the final model, which is friendly to hardware optimization. The channel pruning (CP) module removes redundant channels based on importance criteria and handles CNNs with skip connections properly, further improving the overall compression ratio. As a result, on CIFAR-10, BPCP reduces the number of parameters and MACs of a ResNet56 model by up to 78.9% and 80.3%, respectively, with less than 3% accuracy drop. In terms of speed, it achieves a 3.17× acceleration ratio. Our code has been made available at https://github.com/Pokemon-Huang/BPCP.
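The following minimal sketch (PyTorch) illustrates the two pruning granularities described above. It assumes a learnable gate variable (`alpha`) for the gradient-descent block search and an L1-norm filter importance criterion for channel pruning; neither detail is specified in the abstract, so both are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: a gated residual block for block-level search and
# an L1-norm channel importance score. The gate `alpha` and the L1 criterion
# are assumptions for demonstration, not details taken from the paper.
import torch
import torch.nn as nn


class GatedBlock(nn.Module):
    """Conv-BN-ReLU block wrapped with a scalar gate used only during the search."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Learnable gate optimized by gradient descent; after the search, blocks
        # whose gate is near zero are removed entirely, so no extra parameters
        # remain in the final pruned model.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.sigmoid(self.alpha) * self.body(x)  # residual form


def channel_importance(conv: nn.Conv2d) -> torch.Tensor:
    """Score each output channel by the L1-norm of its filter weights."""
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


# Usage example: rank the channels of a block's convolution and keep the top half.
block = GatedBlock(channels=16)
scores = channel_importance(block.body[0])
keep = torch.topk(scores, k=scores.numel() // 2).indices
print("channels kept:", keep.tolist())
```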
