Abstract

With the help of network compression algorithms, deep neural networks can be deployed on low-power embedded systems and mobile devices such as drones, satellites, and smartphones. Filter pruning is a sub-direction of network compression research that reduces memory and computational consumption by reducing the number of filter parameters in a model. Previous works used the “more-simple-less-important” criterion for pruning filters: filters with smaller norms or sparser weights are pruned first. In this paper, by visualizing feature maps alongside their corresponding filters, we found that feature maps are not fully positively correlated with the sparsity of the filter weights. Hence, we propose that the priority of filter pruning should be determined by redundancy rather than sparsity, where the redundancy of a filter measures the degree to which its output repeats the outputs of other filters. Based on this, we define a criterion called the redundancy index to rank filters and introduce it into our filter pruning strategy. Extensive experiments demonstrate the effectiveness of our approach on different model architectures, including VGGNet, GoogLeNet, DenseNet, and ResNet. Models compressed with our strategy surpass the state-of-the-art in terms of floating-point operation (FLOPs) reduction, parameter reduction, and classification accuracy.
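The abstract does not specify how the redundancy index is computed, so the following is only a minimal sketch of the general idea: it scores each filter by the maximum pairwise cosine similarity between its averaged feature map and those of the other filters in the same layer, measured on a calibration batch. The similarity metric, the pruning ratio, and the helper names `redundancy_index` and `filters_to_prune` are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def redundancy_index(feature_maps: torch.Tensor) -> torch.Tensor:
    """Score each filter by how closely its output repeats other filters' outputs.

    feature_maps: (N, C, H, W) activations of one conv layer on a calibration batch.
    Returns a (C,) tensor; higher values mean a more redundant filter.
    """
    n, c, h, w = feature_maps.shape
    # Average over the batch and flatten each channel's map into a vector.
    flat = feature_maps.mean(dim=0).reshape(c, h * w)
    flat = F.normalize(flat, dim=1)    # unit-length rows
    sim = flat @ flat.t()              # (C, C) pairwise cosine similarity
    sim.fill_diagonal_(-1.0)           # ignore self-similarity
    return sim.max(dim=1).values       # similarity to the most similar other filter

def filters_to_prune(feature_maps: torch.Tensor, ratio: float = 0.3) -> torch.Tensor:
    """Return indices of the most redundant filters to remove (ratio is a free choice)."""
    scores = redundancy_index(feature_maps)
    k = int(scores.numel() * ratio)
    return scores.topk(k).indices
```

Under this assumed metric, a filter whose feature map is nearly duplicated by another filter receives a high score and is pruned first, regardless of how sparse or small-normed its weights are, which is the distinction the paper draws between redundancy and the “more-simple-less-important” criterion.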
