Abstract

Many convolutional neural networks (CNNs) have been proposed to solve computer vision tasks such as image classification and image segmentation. However, CNNs usually contain a large number of parameters, which makes them expensive in computation and power and therefore difficult to deploy on resource-limited devices. Network pruning and network quantization are the two main methods for compressing CNNs, but researchers often apply them individually without considering the relationship between them. In this paper, we explore the coupling between network pruning and quantization, as well as the limits of current network compression training methods. We then propose a new regularized training method that combines pruning and quantization within a single, simple training framework. Experiments show that with the proposed framework the fine-tuning stage is no longer needed, which substantially reduces training time. The simulation results also show that the resulting networks can outperform those trained with traditional methods. The proposed framework is well suited to CNNs deployed on portable devices with limited computational resources and power supply.
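The abstract does not give the exact form of the proposed regularizer, but the general idea of combining pruning and quantization through regularized training can be sketched as below. This is a hypothetical illustration, not the paper's method: it adds an L1 term (encouraging sparsity, so small weights can be pruned) and a quantization-error term (pulling surviving weights toward a uniform quantization grid) to the training loss. All function names, the bit width, and the penalty coefficients are assumptions for illustration.

```python
import numpy as np

def quantize(w, num_bits=4):
    # Uniform symmetric quantization to 2**num_bits levels (illustrative).
    scale = np.max(np.abs(w)) / (2 ** (num_bits - 1) - 1)
    if scale == 0:
        return w
    return np.round(w / scale) * scale

def combined_regularizer(w, lam_prune=1e-3, lam_quant=1e-3, num_bits=4):
    # L1 term encourages sparsity (pruning); the quantization-error term
    # penalizes the distance between weights and their quantized values,
    # so both compression goals are pursued during ordinary training
    # instead of in separate prune/quantize/fine-tune stages.
    prune_term = lam_prune * np.sum(np.abs(w))
    quant_term = lam_quant * np.sum((w - quantize(w, num_bits)) ** 2)
    return prune_term + quant_term
```

In such a scheme, the regularizer would simply be added to the task loss at each training step, which is one way a single framework could make a separate fine-tuning pass unnecessary.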
