In recent years, deep neural networks have achieved remarkable success in many pattern recognition tasks. However, their high computational cost and large memory overhead hinder deployment on resource-limited devices. To address this problem, many deep network acceleration and compression methods have been proposed. One group of methods adopts decomposition and pruning techniques to accelerate and compress a pre-trained model. Another group designs a single compact unit and stacks it to build compact networks. These methods either require complicated training processes or lack generality and extensibility. In this paper, we propose a general framework of architecture distillation, namely LightweightNet, to accelerate and compress convolutional neural networks. Rather than compressing a pre-trained model, we directly construct the lightweight network based on a baseline network architecture. The LightweightNet, designed based on a comprehensive analysis of the network architecture, consists of network parameter compression, network structure acceleration, and non-tensor layer improvement. Specifically, we propose the strategy of low-dimensional features for fully-connected layers to achieve substantial memory savings, and design multiple efficient compact blocks to distill the convolutional layers of the baseline network under an accuracy-sensitive distillation rule for notable time savings. The resulting framework reduces both the computational cost and the model size by more than 4× with negligible accuracy loss. Benchmarks on the MNIST, CIFAR-10, ImageNet and HCCR (handwritten Chinese character recognition) datasets demonstrate the advantages of the proposed framework in terms of speed, performance, storage and training process. On HCCR, our method even outperforms traditional classifiers based on handcrafted features in terms of speed and storage while maintaining state-of-the-art recognition performance.
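To make the two core ideas of the abstract concrete, the following is a minimal sketch in PyTorch of (a) replacing a wide fully-connected layer with a low-dimensional bottleneck and (b) replacing a standard convolutional layer with a compact block. The layer sizes, the 1×1/3×3/1×1 block design, and the helper `count_params` are illustrative assumptions, not the exact LightweightNet blocks or distillation rule described in the paper.

```python
import torch.nn as nn

def count_params(module):
    """Total number of trainable parameters in a module (hypothetical helper)."""
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

# Baseline fully-connected layer: 512-d features -> 4096-d hidden layer.
fc_baseline = nn.Linear(512, 4096)

# Hypothetical low-dimensional variant: project onto a small 64-d bottleneck
# first, so the parameter count drops from 512*4096 to roughly 512*64 + 64*4096.
fc_lowdim = nn.Sequential(nn.Linear(512, 64), nn.Linear(64, 4096))

# Baseline 3x3 convolution with 256 input and 256 output channels.
conv_baseline = nn.Conv2d(256, 256, kernel_size=3, padding=1)

# Hypothetical compact block: 1x1 channel reduction, a cheaper 3x3 convolution
# on fewer channels, then a 1x1 expansion back to 256 channels.
conv_compact = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.Conv2d(64, 256, kernel_size=1),
)

print(count_params(fc_baseline), count_params(fc_lowdim))       # ~2.10M vs ~0.30M
print(count_params(conv_baseline), count_params(conv_compact))  # ~590K vs ~70K
```

With these illustrative sizes, both replacements cut parameters (and, for the convolutions, multiply-accumulate operations) by several times, which is the kind of reduction the abstract attributes to combining fully-connected compression with compact convolutional blocks.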