Abstract
Model pruning, as an optimisation technique for convolutional neural network (CNN) models, can effectively compress and accelerate complex CNN models. Recent network pruning algorithms usually focus on removing unimportant and redundant convolutional kernels or channels from the network. This paper proposes a hierarchical pruning algorithm, HCPrune, which first classifies the convolutional layers of a CNN model and then prunes them, improving prediction accuracy while reducing the number of parameters and FLOPs. The convolutional layers are divided according to the relationship between each layer's receptive field size and the resolution of the input image. Experiments are conducted with the VGG and ResNet families of networks on top of several popular pruning methods on the CIFAR-10 and CIFAR-100 datasets, keeping the number of parameters and FLOPs the same as in the baseline pruning methods. VGG16 improves accuracy by 0.42% on CIFAR-10 and 0.47% on CIFAR-100, and ResNet164 improves accuracy by about 0.2% under the same configuration.
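To make the layer-classification idea concrete, the following is a minimal sketch of how convolutional layers might be grouped by comparing each layer's receptive field with the input resolution, as the abstract describes. The exact criterion, the group names ("local"/"global"), and the function names are assumptions for illustration only and are not taken from the paper.

```python
# Illustrative sketch only: the paper's exact grouping rule and pruning ratios
# are not given in the abstract, so everything below is an assumption.
from typing import List, Tuple


def receptive_fields(layers: List[Tuple[int, int]]) -> List[int]:
    """Receptive field after each layer of a plain sequential CNN.

    `layers` is a list of (kernel_size, stride) pairs for conv/pool layers.
    """
    rf, jump = 1, 1
    fields = []
    for k, s in layers:
        rf = rf + (k - 1) * jump  # standard receptive-field recursion
        jump = jump * s
        fields.append(rf)
    return fields


def classify_layers(layers: List[Tuple[int, int]], input_resolution: int) -> List[str]:
    """Label each layer by comparing its receptive field with the input size.

    Layers whose receptive field is still smaller than the input image form one
    group ("local"), the remaining layers another ("global"); a hierarchical
    pruning scheme could then apply different pruning ratios to the two groups.
    """
    return ["local" if rf < input_resolution else "global"
            for rf in receptive_fields(layers)]


if __name__ == "__main__":
    # Toy VGG-like stack on 32x32 CIFAR images: (kernel, stride) per layer.
    vgg_like = [(3, 1), (3, 1), (2, 2), (3, 1), (3, 1), (2, 2), (3, 1), (3, 1)]
    print(classify_layers(vgg_like, input_resolution=32))
```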