Abstract

The large number of parameters and computing operations in deep convolutional neural networks (CNNs) hinders their application to real-world scenarios. In this paper, we propose a channel-level pruning strategy for convolutional layers that reduces the number of parameters and computing operations in CNNs with no accuracy loss. First, we use a “Squeeze-and-Excitation” block to extract the activation factors of each sample, which can be used to evaluate the importance of each channel. Second, we compute the overall weight of a specific channel by accumulating its activation factors generated by all training samples. Finally, we prune the redundant channels with low weight and thus obtain a compact network. On a colorectal pathology dataset, we reduce the number of channels by a factor of 5× and the convolutional-layer parameters by a factor of 21×, without any accuracy loss.
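The accumulation and selection steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the per-sample SE gate values have already been collected into an array, and the function name, array layout, and pruning ratio are all hypothetical.

```python
import numpy as np

def select_channels_to_prune(se_activations, prune_ratio):
    """Hypothetical sketch of the channel-selection step.

    se_activations[i, c] is assumed to be the SE activation factor for
    channel c produced by training sample i.
    """
    # Step 2: accumulate each channel's activation factors over all
    # training samples to obtain its overall weight.
    channel_weights = se_activations.sum(axis=0)
    # Step 3: the lowest-weight channels are treated as redundant.
    n_prune = int(len(channel_weights) * prune_ratio)
    order = np.argsort(channel_weights)  # ascending by weight
    return order[:n_prune]  # indices of channels to remove

# Toy usage: 4 samples, 5 channels; channels 1 and 3 have low activations.
acts = np.array([[0.90, 0.10, 0.80, 0.05, 0.50],
                 [0.80, 0.20, 0.70, 0.10, 0.60],
                 [0.95, 0.15, 0.90, 0.02, 0.40],
                 [0.85, 0.05, 0.60, 0.08, 0.55]])
pruned = select_channels_to_prune(acts, prune_ratio=0.4)  # prune 2 of 5
```

In this toy example the two consistently low-activation channels (indices 1 and 3) are the ones selected for removal.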
