Abstract

Multi-image super-resolution reconstruction methods face several difficulties: acquiring and processing multiple low-resolution images is hard, the complementary information between different images is not fully exploited, and details are lost. To address these problems, this paper proposes a new degradation model and an improved SRGAN for multi-image super-resolution reconstruction. First, a new degradation model combining masked autoencoding and downsampling (DMMD) is designed; it simulates the complex degradation conditions found in real scenes and reduces the difficulty of obtaining and processing multiple low-resolution images. Next, a weight-setting strategy for image fusion is designed to make full use of the complementary information between different low-resolution images. To strengthen the network's attention to, and propagation of, high-frequency information in the feature maps, the convolutional block attention module (CBAM) is introduced into the generator of the SRGAN model, yielding an SRGAN combined with CBAM (SRGANCBAM). Finally, the fused low-resolution image is fed into SRGANCBAM to reconstruct the corresponding high-resolution image. Experimental results show that DMMD alleviates the difficulty of acquiring and processing multiple low-resolution images, and that the proposed weight-setting strategy makes full use of the complementary information between images. On four public datasets, the PSNR of SRGANCBAM exceeds the baseline model by 0.532, 0.207, 0.357 and 0.537, respectively. Compared with state-of-the-art methods, SRGANCBAM achieves higher evaluation-metric values and reconstructs clearer, more realistic images.
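The fusion step summarized above, combining several aligned low-resolution observations with per-image weights before reconstruction, can be sketched as a weighted average. The paper's actual weight-setting strategy is not given in the abstract, so the uniform weights below are purely an illustrative assumption, and the function name is hypothetical.

```python
import numpy as np

def fuse_lr_images(images, weights):
    """Fuse aligned low-resolution observations of the same scene
    into one LR image by a normalized weighted average.

    images  : list of H x W arrays (aligned LR observations)
    weights : one non-negative weight per image; normalized here
              so the fused image stays in the original value range.

    The weight-setting strategy itself is paper-specific; callers
    would supply weights reflecting each image's information content.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                   # normalize weights to sum to 1
    stack = np.stack(images, axis=0)  # shape (N, H, W)
    return np.tensordot(w, stack, axes=1)

# Example: three constant 4x4 "LR observations" fused with uniform weights
lrs = [np.full((4, 4), v, dtype=float) for v in (1.0, 2.0, 3.0)]
fused = fuse_lr_images(lrs, [1, 1, 1])
print(fused[0, 0])  # 2.0 (the uniform average of 1, 2 and 3)
```

With non-uniform weights the same routine favors the observations judged more informative, which is the role the paper's weight-setting strategy plays before the fused image enters SRGANCBAM.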
