Abstract
Channel pruning has recently been formulated as a neural architecture search (NAS) problem, achieving impressive performance in model compression. However, prior works consider only a single constraint (FLOPs, inference latency, or model size) when pruning a neural network. This leads to an unbalanced-pruning problem, where the FLOPs of a pruned network are within budget but its inference latency and model size remain unaffordable. Another challenge is that the supernet training process of typical NAS-based channel pruning methods is computationally expensive. To address these problems, we propose a novel Accurate and Automatic Channel Pruning (AACP) method. First, we impose multiple constraints on channel pruning to resolve the unbalanced-pruning problem. To solve the resulting complicated multi-objective problem, AACP introduces an Improved Differential Evolution (IDE) algorithm that searches for optimal architectures more effectively than typical evolutionary algorithms. Second, AACP introduces a Pruned Structure Accuracy Estimator (PSAE), which estimates the performance of sub-networks without training a supernet and thereby speeds up performance estimation. Our method achieves state-of-the-art performance on several benchmarks. On CIFAR10, it reduces the FLOPs of ResNet110 by 65% while improving top-1 accuracy by 0.26%. On ImageNet, it reduces the FLOPs of ResNet50 by 42% with a small top-1 accuracy loss of 0.06%. The code is available at https://github.com/linlb11/AACP.
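To make the search setup concrete: the abstract names an Improved Differential Evolution (IDE) search under multiple resource budgets, but the IDE details are not given here. The sketch below is therefore a minimal, plain DE/rand/1 search over per-layer channel-width ratios, with FLOPs, latency, and model size treated as hard constraints. All names and cost numbers (feasible, accuracy_proxy, the per-layer cost vectors) are illustrative assumptions, and accuracy_proxy is only a stand-in for the paper's PSAE estimator, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-layer costs: a unit of width in layer i costs
# FLOPS[i], LATENCY[i], and SIZE[i]. Real costs would come from profiling.
N_LAYERS = 8
FLOPS = rng.uniform(1.0, 4.0, N_LAYERS)
LATENCY = rng.uniform(0.5, 2.0, N_LAYERS)
SIZE = rng.uniform(0.2, 1.0, N_LAYERS)
BUDGETS = {
    "flops": 0.6 * FLOPS.sum(),      # e.g. keep <= 60% of full-model FLOPs
    "latency": 0.7 * LATENCY.sum(),  # and <= 70% of latency / size
    "size": 0.7 * SIZE.sum(),
}

def feasible(w):
    """Check all resource budgets at once (the 'multiple constraints')."""
    return (w @ FLOPS <= BUDGETS["flops"]
            and w @ LATENCY <= BUDGETS["latency"]
            and w @ SIZE <= BUDGETS["size"])

def accuracy_proxy(w):
    """Stand-in for an accuracy estimator such as PSAE: here it simply
    rewards keeping more channels (purely illustrative)."""
    return float(w.mean())

def de_search(pop_size=30, gens=100, f=0.5, cr=0.9):
    # Each individual is a vector of per-layer width ratios in [0.1, 1].
    pop = rng.uniform(0.1, 1.0, (pop_size, N_LAYERS))
    fit = np.array([accuracy_proxy(w) if feasible(w) else -np.inf for w in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1: pick three random individuals (ideally distinct
            # from i; kept simple for this sketch).
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + f * (b - c), 0.1, 1.0)  # mutation
            cross = rng.random(N_LAYERS) < cr            # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            t_fit = accuracy_proxy(trial) if feasible(trial) else -np.inf
            if t_fit >= fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, t_fit
    best = int(np.argmax(fit))
    return pop[best], fit[best]

best_widths, best_score = de_search()
print("per-layer width ratios:", np.round(best_widths, 2))
```

Infeasible candidates are simply assigned a fitness of negative infinity, so the population is steadily pushed toward architectures that satisfy all three budgets simultaneously; this is one common way to handle hard constraints in evolutionary search, not necessarily the mechanism AACP's IDE uses.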