Abstract

To improve reconstruction accuracy and efficiency in image super-resolution, this paper proposes a super-resolution reconstruction algorithm based on a generative adversarial network with double discriminators (SRGAN-DD). The proposed algorithm adds a second discriminator to the SRGAN model and combines the Kullback-Leibler (KL) divergence and the reverse KL divergence into a unified objective function for training the two discriminators. By exploiting the complementary statistical properties of the two divergences, SRGAN-DD spreads the estimated density across multiple modes, effectively avoiding mode collapse during reconstruction and improving the robustness and efficiency of training. In the loss function design, the content loss is constructed with the Charbonnier loss, and the perceptual loss and style loss are built from feature maps of intermediate layers of a deep neural network, yielding a combined loss function. Finally, a deconvolution operation is introduced into the reconstruction network to reduce the time complexity of reconstruction. To validate feasibility and effectiveness, three groups of experiments compare the proposed SRGAN-DD model with state-of-the-art algorithms. The results show that the proposed algorithm achieves the best performance on both objective and subjective metrics; with the combined loss function, the reconstructed images exhibit fewer artifacts and less noise, and SRGAN-DD delivers significant gains in the perceived quality of reconstructed images.
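The two core ingredients named above, the Charbonnier content loss and the combined forward/reverse KL objective, can be sketched as below. This is a minimal illustration over NumPy arrays, not the authors' implementation; the function names and the value of eps are assumptions, and the dual-KL term is shown on discrete distributions for clarity.

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: mean of sqrt(diff^2 + eps^2).

    A differentiable, robust variant of L1 often used as a content loss.
    eps is an assumed small constant; the paper's exact value is not given here.
    """
    diff = pred - target
    return np.mean(np.sqrt(diff * diff + eps * eps))

def kl(p, q):
    """KL divergence KL(p || q) for discrete distributions (assumes p, q > 0)."""
    return np.sum(p * np.log(p / q))

def dual_kl(p, q):
    """Combined forward and reverse KL, illustrating the unified objective:
    the forward term penalizes missing modes, the reverse term penalizes
    spurious mass, giving the complementary statistics the abstract describes."""
    return kl(p, q) + kl(q, p)
```

Note that the combined term is symmetric in its arguments, which is one way to see that the two discriminators receive complementary training signals.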

