Abstract

The generative adversarial network (GAN) has been successfully applied to perceptual single image super-resolution (SISR). However, because the GAN is data-driven, it has a fundamental limitation in restoring the true high-frequency information of an unseen instance (or image) at test time. Conventional model-based methods, by contrast, are better suited to instance adaptation, since they operate on the statistics of each instance (or image) alone. Motivated by this, we propose a novel model-based algorithm that efficiently extracts the detail layer of an image. The detail layer represents the high-frequency information of the image and consists of edges and fine textures. It is seamlessly incorporated into the GAN, where it serves as prior knowledge that helps the GAN generate more realistic details. The proposed method, named DSRGAN, takes advantage of both the model-based conventional algorithm and the data-driven deep learning network. Experimental results demonstrate that DSRGAN outperforms state-of-the-art SISR methods on perceptual metrics while achieving comparable results on fidelity metrics. Following DSRGAN, it is feasible to incorporate other conventional image processing algorithms into a deep learning network to form a model-based deep SISR method.
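The abstract does not specify how the detail layer is computed; as a minimal illustration of the general idea (a base/detail decomposition where the detail layer carries edges and fine textures), the sketch below uses a simple Gaussian smoothing as the base extractor. The function name `extract_detail_layer` and the choice of filter are assumptions for illustration only, not the paper's actual algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def extract_detail_layer(image: np.ndarray, sigma: float = 2.0):
    """Split an image into a smooth base layer and a high-frequency
    detail layer. NOTE: the paper's model-based extraction is not
    described in this abstract; Gaussian smoothing is a stand-in here.
    """
    base = gaussian_filter(image.astype(np.float64), sigma=sigma)
    detail = image - base  # edges and fine textures remain
    return base, detail


# A constant (flat) image has a near-zero detail layer, while a sharp
# step edge produces a strong response in the detail layer.
flat = np.full((32, 32), 0.5)
_, flat_detail = extract_detail_layer(flat)

step = np.zeros((32, 32))
step[:, 16:] = 1.0
_, step_detail = extract_detail_layer(step)
```

By construction the decomposition is exactly invertible (`base + detail == image`), which is what lets such a prior be injected into a network without losing information.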
