Abstract

Filter pruning is an effective method for reducing the size of convolutional neural networks without sacrificing performance. Most filter pruning methods prioritize filters with high information content but fail to consider that filters with low information content might still capture essential features. Moreover, we have discovered that the distinctions among the feature maps generated by different filters can identify crucial features. Based on this insight, we propose a novel pruning method called inTer-feAture distinctIon fiLter fusiOn pRuning (TAILOR), which fuses the feature distinctions between filters. TAILOR randomly selects multiple candidate filter sets within a convolutional layer and computes the feature maps that each set produces in the next convolutional layer. It then applies a distinction optimization scheme to find the optimal filter set, whose filters supplant those of the original convolutional layer. Experimental results indicate that the inter-feature distinctions among filters significantly affect filter pruning. TAILOR outperforms state-of-the-art filter-pruning methods in terms of model prediction accuracy, floating-point operations (FLOPs), and parameter count. For instance, on VGG-16, TAILOR achieves a 73.89% FLOPs reduction by removing 91.85% of the parameters while improving accuracy by 0.36% on CIFAR-10.
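The abstract does not specify TAILOR's distinction measure or its optimization scheme, so the following is only a minimal sketch of the general idea: randomly sample candidate filter subsets and keep the one whose feature maps are most mutually distinct. The function names (`feature_distinction`, `select_filter_set`) and the use of mean pairwise Euclidean distance as the distinction score are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def feature_distinction(feature_maps):
    """Illustrative distinction score (assumed, not TAILOR's actual measure):
    mean pairwise Euclidean distance between flattened feature maps."""
    flat = feature_maps.reshape(feature_maps.shape[0], -1)
    n = flat.shape[0]
    total, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.linalg.norm(flat[i] - flat[j])
            count += 1
    return total / count if count else 0.0

def select_filter_set(feature_maps, keep, trials=100, seed=None):
    """Randomly sample `trials` candidate subsets of `keep` filters and
    return the subset whose feature maps score highest on the distinction
    measure. A stand-in for the paper's optimization scheme."""
    rng = np.random.default_rng(seed)
    n_filters = feature_maps.shape[0]
    best_idx, best_score = None, -np.inf
    for _ in range(trials):
        idx = rng.choice(n_filters, size=keep, replace=False)
        score = feature_distinction(feature_maps[idx])
        if score > best_score:
            best_score, best_idx = score, np.sort(idx)
    return best_idx, best_score

# Toy usage: 16 filters producing 8x8 feature maps, prune down to 4.
maps = np.random.default_rng(0).normal(size=(16, 8, 8))
kept, score = select_filter_set(maps, keep=4, trials=50, seed=0)
```

In the real method, the surviving filters would then replace the layer's original filter bank before fine-tuning; the random search here is only one of many possible ways to explore candidate subsets.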
