Abstract

The convolutional neural network (CNN), one of the most important pillars of deep learning, has received increasing attention from researchers. However, as its applications expand, the CNN faces increasingly complex problems and diverse situations, and the number of parameters in convolutional neural networks keeps growing to address them. This limits their potential to be deployed on remote devices with relatively low computing power and memory. To improve this situation, this paper studies the compression of a convolutional neural network model with an acceptable sacrifice of accuracy. Based on PyTorch, ResNet-56 trained on CIFAR-10 is pruned. Using the HRank pruning method, the convolution kernels of each convolutional layer are ordered according to the matrix rank of the feature maps they generate; pruning is achieved by deleting low-rank, less important kernels and retaining high-rank, more important ones. By adjusting the default compression ratio of each convolutional layer, applying lower compression rates to more important layers and higher compression rates to less important ones, the paper achieves a higher overall compression rate with an acceptable loss of accuracy. The paper also tests the efficiency of pruning under different learning rates. Finally, the paper finds that when the preset compression rate of the intermediate convolutional layers is changed slightly, accuracy is maintained at 92.720% while the parameter compression rate reaches 44.7% (0.47M parameters remaining) and the FLOPs compression rate reaches 51.1% (61.39M FLOPs remaining). In addition, the paper finds that learning efficiency is optimal at a learning rate of 0.05.
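The rank-based selection described above can be illustrated with a minimal NumPy sketch. This is not the paper's PyTorch implementation: the function names, the toy feature maps, and the per-layer compression rate are illustrative assumptions. The idea follows the abstract: score each filter by the average matrix rank of the feature maps it produces, then prune the lowest-ranked fraction of filters according to that layer's compression rate.

```python
import numpy as np

def filter_ranks(feature_maps):
    """Average matrix rank per output channel.

    feature_maps: array of shape (batch, channels, H, W), e.g. the
    activations of one convolutional layer on a small calibration batch.
    """
    batch, channels, _, _ = feature_maps.shape
    return np.array([
        np.mean([np.linalg.matrix_rank(feature_maps[n, c]) for n in range(batch)])
        for c in range(channels)
    ])

def select_filters(ranks, compression_rate):
    """Keep the highest-rank filters; prune a `compression_rate` fraction."""
    n_keep = len(ranks) - int(len(ranks) * compression_rate)
    kept = np.argsort(ranks)[::-1][:n_keep]  # indices of top-ranked filters
    return np.sort(kept)

# Toy example: 4 images, 8 channels, 16x16 maps; force two channels to rank 1
rng = np.random.default_rng(0)
fm = rng.standard_normal((4, 8, 16, 16))
for ch in (1, 5):
    fm[:, ch] = rng.standard_normal((4, 16, 1)) @ rng.standard_normal((4, 1, 16))

ranks = filter_ranks(fm)
kept = select_filters(ranks, compression_rate=0.25)  # prune 2 of 8 filters
print(kept)  # the two rank-1 channels (1 and 5) are pruned
```

In the actual pruned network, the retained indices would then be used to slice the convolution weight tensor (and the matching batch-norm parameters) of that layer; per-layer compression rates simply mean calling `select_filters` with a different `compression_rate` for each layer.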
