Abstract

Single image super-resolution based on deep learning has reached state-of-the-art performance. Nevertheless, most frameworks require a huge training set and powerful computing hardware. This paper therefore presents the Cascading Parallel-structure units through a Deep-Shallow (CPDS) CNN, which delivers satisfactory results with a small training set and low computational cost by fully exploiting the network design. In the proposed deep-shallow CNN, an approximate image is reconstructed by a lightweight shallow network, and the missing high-frequency details are recovered by its complementary deep network. The deep CNN is built from cascading parallel-structure units throughout the model, reducing the model depth and the number of network parameters. In the proposed cascading strategy, feature disappearance is limited by concatenating and fusing features. Owing to its parallel structure, the deep CNN first extracts initial features with two parallel feature extractor units, and it adopts a multi-scale design to detect and extract coarse-to-fine features simultaneously. Finally, an upsampling strategy combining a deconvolution layer and pixel shuffle is adopted to reconstruct the high-resolution image accurately. Experimental results demonstrate that the proposed end-to-end approach outperforms most state-of-the-art methods in terms of training requirements, evaluation metrics, and visual quality.
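The abstract does not specify layer counts, unit internals, or the exact fusion scheme, so the following PyTorch sketch is only a hypothetical illustration of the overall design it describes: a lightweight shallow branch producing a coarse approximation via a deconvolution layer, a deep branch of cascaded parallel-structure units whose features are concatenated and fused before pixel-shuffle upsampling, and a sum of the two outputs. All module names, channel widths, and kernel sizes here are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn


class ShallowBranch(nn.Module):
    """Lightweight shallow path: coarse HR approximation via deconvolution (assumed structure)."""
    def __init__(self, channels=3, feats=32, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, feats, 3, padding=1),
            nn.ReLU(inplace=True),
            # Transposed convolution is one of the two upsampling methods named in the abstract.
            nn.ConvTranspose2d(feats, channels, scale * 2, stride=scale, padding=scale // 2),
        )

    def forward(self, x):
        return self.body(x)


class ParallelUnit(nn.Module):
    """Toy parallel-structure unit: two convolution paths with different kernel sizes
    (a stand-in for the multi-scale, coarse-to-fine idea), concatenated and fused."""
    def __init__(self, feats=32):
        super().__init__()
        self.fine = nn.Conv2d(feats, feats, 3, padding=1)
        self.coarse = nn.Conv2d(feats, feats, 5, padding=2)
        self.fuse = nn.Conv2d(2 * feats, feats, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.act(self.fine(x)), self.act(self.coarse(x))], dim=1)
        return self.fuse(y)


class DeepBranch(nn.Module):
    """Deep path: two parallel feature extractors, cascaded units whose outputs are
    concatenated (limiting feature disappearance), then pixel-shuffle upsampling."""
    def __init__(self, channels=3, feats=32, n_units=4, scale=2):
        super().__init__()
        self.head_a = nn.Conv2d(channels, feats, 3, padding=1)   # parallel extractor 1
        self.head_b = nn.Conv2d(channels, feats, 5, padding=2)   # parallel extractor 2
        self.units = nn.ModuleList(ParallelUnit(feats) for _ in range(n_units))
        self.gather = nn.Conv2d(feats * (n_units + 1), feats, 1)  # fuse cascaded features
        self.up = nn.Sequential(
            nn.Conv2d(feats, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),                                # sub-pixel upsampling
        )

    def forward(self, x):
        f = self.head_a(x) + self.head_b(x)
        outs = [f]
        for unit in self.units:
            f = unit(f)
            outs.append(f)
        return self.up(self.gather(torch.cat(outs, dim=1)))


class CPDSSketch(nn.Module):
    """Deep-shallow composition: coarse approximation plus high-frequency residual."""
    def __init__(self, scale=2):
        super().__init__()
        self.shallow = ShallowBranch(scale=scale)
        self.deep = DeepBranch(scale=scale)

    def forward(self, x):
        return self.shallow(x) + self.deep(x)


if __name__ == "__main__":
    lr = torch.randn(1, 3, 48, 48)        # low-resolution input patch
    sr = CPDSSketch(scale=2)(lr)
    print(sr.shape)                        # torch.Size([1, 3, 96, 96])
```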
