Abstract

In recent years, neural network models have grown steadily deeper, with ever more parameters and an ever larger computational scale. This raises the hardware requirements for using otherwise excellent models and hinders the wide application of deep learning methods in more fields. In view of this trend toward larger neural network models, in this paper we optimize the structure of a convolutional neural network model for image super-resolution to reduce its size. The structure optimization method we use is network pruning, which reduces the number of layers and parameters of the model, improves its accuracy, and lowers its computational cost. The key insight of network pruning is to remove the relatively redundant and unimportant parts of a network, making the original network sparser and more streamlined while preserving its performance. The original model used a cascade structure that repeats the sampling process several times, which inflated the scale of the network. By removing the redundant sampling stages and retaining only one sampling process, we reduce the number of layers of the model to 1/3 of the original. Trained on the same dataset (BSD300), the pruned model improves PSNR (the evaluation metric for model quality) from 24.471 dB to 24.490 dB, while training time is reduced by 13.8%.
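To illustrate the general idea of network pruning described above, the following is a minimal, hypothetical sketch (not the paper's actual method, which removes whole sampling stages): classic magnitude-based pruning zeroes out the smallest-magnitude weights so that the network becomes sparser while the dominant weights, and hence most of the performance, are retained. The function name and the example weights are illustrative assumptions.

```python
def prune_weights(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude
    fraction `sparsity` of entries set to zero (magnitude pruning).

    Illustrative sketch only; real pruning operates on the weight
    tensors of each layer and is usually followed by fine-tuning.
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight;
    # everything at or below it is considered "unimportant".
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Hypothetical example: prune half of a small weight vector.
w = [0.5, -0.02, 0.31, 0.003, -0.9, 0.07]
print(prune_weights(w, 0.5))  # [0.5, 0.0, 0.31, 0.0, -0.9, 0.0]
```

Structural pruning, as used in the paper, goes one step further: instead of zeroing individual weights, it removes entire redundant layers or stages, which directly shrinks the model and its training cost.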
