Abstract

Network pruning facilitates the deployment of convolutional neural networks (CNNs) on resource-limited embedded devices. Pruning as much redundant structure as possible while preserving accuracy is challenging. Most existing CNN compression methods iteratively prune the “least important” filters and retrain the pruned network layer by layer, which may lead to a sub-optimal solution. In this paper, an end-to-end structured network pruning method based on adversarial multi-indicator architecture selection (AMAS) is presented. Pruning is performed by aligning the output of the pruned network with that of the baseline network in a generative adversarial framework. Furthermore, to efficiently find an optimal pruned architecture under resource constraints, an adversarial fine-tuning network selection strategy is designed that balances two contradictory indicators: the number of pruned channels and network classification accuracy. Experiments on SVHN show that AMAS reduces FLOPs by 75.37% and parameters by 74.42% while improving accuracy by 0.36% for ResNet-110. On CIFAR-10, it reduces FLOPs by 77.08% and parameters by 73.98% with negligible accuracy cost for GoogLeNet. In particular, it prunes 56.87% of FLOPs and 59.18% of parameters while improving accuracy by 0.49% for ResNet-110, significantly outperforming state-of-the-art methods.
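The abstract only summarizes the method, so the sketch below is a rough PyTorch-style illustration of the output-alignment idea, assuming the adversarial setup treats the baseline network's logits as “real” and the pruned network's logits as “fake”, plus a toy version of trading off the two selection indicators. All names (Discriminator, align_step, select_candidate) and the scoring weight are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Tries to distinguish baseline logits (real) from pruned logits (fake)."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 64),
            nn.LeakyReLU(0.2),
            nn.Linear(64, 1),  # raw score; BCEWithLogitsLoss applies sigmoid
        )

    def forward(self, logits):
        return self.net(logits)

bce = nn.BCEWithLogitsLoss()

def align_step(baseline, pruned, disc, opt_pruned, opt_disc, images):
    """One adversarial alignment step on a batch of images (illustrative)."""
    with torch.no_grad():
        real = baseline(images)   # fixed baseline logits
    fake = pruned(images)         # pruned-network logits

    # Discriminator update: label baseline logits 1, pruned logits 0.
    opt_disc.zero_grad()
    d_loss = bce(disc(real), torch.ones(real.size(0), 1)) + \
             bce(disc(fake.detach()), torch.zeros(fake.size(0), 1))
    d_loss.backward()
    opt_disc.step()

    # Pruned-network update: fool the discriminator so its outputs
    # become indistinguishable from the baseline's.
    opt_pruned.zero_grad()
    g_loss = bce(disc(fake), torch.ones(fake.size(0), 1))
    g_loss.backward()
    opt_pruned.step()
    return d_loss.item(), g_loss.item()

def select_candidate(candidates, trade_off: float = 0.1):
    """Pick a pruned architecture balancing accuracy against pruned-channel
    ratio; the weighted sum here is a stand-in for the paper's criterion."""
    return max(candidates, key=lambda c: c["acc"] + trade_off * c["pruned_ratio"])
```

In such a setup the pruned network plays the generator role: rather than matching ground-truth labels directly, it is trained until the discriminator can no longer tell its outputs from the baseline's, which is one way to read the abstract's “align the output” phrasing.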
