Image super-resolution (SR) is the task of inferring a high-resolution (HR) image from one or more low-resolution (LR) inputs. Traditional SR networks are evaluated by pixel-level metrics such as the Peak Signal-to-Noise Ratio (PSNR), which do not always align with human perception of image quality; as a result, they often produce overly smooth images that lack high-frequency texture and appear unnatural. In this paper, we therefore propose a lightweight adaptive residual dense attention generative adversarial network (SRARDA) for image SR. First, our generator adopts the residual-in-residual (RIR) structure but redesigns its basic module: using an adaptive residual connection (ARC) to dynamically adjust the relative importance of the residual and main paths, we design a novel adaptive residual dense attention block (ARDAB) that strengthens the generator's feature extraction capability. In addition, we build a high-frequency filtering unit (HFU) to extract more high-frequency features from the LR space. Finally, to make full use of the discriminator, we adopt the WGAN loss to measure the difference between the HR image and the reconstructed image. Experiments demonstrate that SRARDA effectively alleviates the over-smoothing of reconstructed images while improving visual quality.
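As a rough illustration of the adaptive residual connection (ARC) idea described above, the sketch below re-weights the residual (identity) and main paths with learnable scalars. The module name, the scalar formulation, and the parameter names alpha/beta are assumptions for illustration only, not the paper's implementation.

```python
import torch
import torch.nn as nn


class AdaptiveResidualConnection(nn.Module):
    """Hypothetical ARC sketch: learnable weighting of the residual and main paths.

    Computes out = alpha * x + beta * f(x), where alpha and beta are
    trained jointly with the rest of the network. This is an assumed
    formulation, not the authors' released code.
    """

    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))  # weight on the residual (identity) path
        self.beta = nn.Parameter(torch.ones(1))   # weight on the main (feature) path

    def forward(self, x: torch.Tensor, fx: torch.Tensor) -> torch.Tensor:
        # x: block input (residual path); fx: output of the block's main branch
        return self.alpha * x + self.beta * fx
```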