Single image super-resolution at large scale factors, i.e., scale factors greater than 4, is important in real-world applications. Existing methods for large scale factors, however, often upsample the low-resolution image in a single step, which produces edge artifacts in the reconstructed image. In this article, we propose an improved asymmetric Laplacian pyramid network that realizes large-scale-factor image super-resolution while fully exploiting features at multiple scales. A distinct architecture is applied at each level of the pyramid to improve feature extraction. At the first level of the pyramid, we introduce a lightweight transformer design whose multi-head attention mechanism allows the model to efficiently capture contextual information across the sequence. Additionally, we combine improved dense skip connections with recursive operations to form a deep dense recursive convolutional neural network that fuses low-level and high-level features while enlarging the network's receptive field. Quantitative and qualitative evaluation on benchmark datasets demonstrates that our method achieves superior PSNR and SSIM and is more consistent with human visual perception.
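To illustrate the multi-scale decomposition that underlies Laplacian pyramid approaches (as opposed to single-step upsampling), the sketch below builds and inverts a classical Laplacian pyramid with NumPy. This is a minimal illustration of the general technique only, not the network described in the abstract: the nearest-neighbour `down`/`up` operators and the function names are assumptions chosen for simplicity, whereas the actual network learns its per-level reconstruction.

```python
import numpy as np

def down(img):
    # 2x nearest-neighbour downsampling (stand-in for a learned/blurred reduce step)
    return img[::2, ::2]

def up(img):
    # 2x nearest-neighbour upsampling (stand-in for a learned expand step)
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    """Decompose img into per-scale high-frequency residuals plus a coarse base."""
    pyramid, cur = [], img
    for _ in range(levels):
        small = down(cur)
        pyramid.append(cur - up(small))  # high-frequency detail at this scale
        cur = small
    pyramid.append(cur)                  # coarsest low-resolution approximation
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition: upsample coarse-to-fine, adding back residuals."""
    cur = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        cur = up(cur) + residual
    return cur

# Usage: a 16x16 image split into two residual levels plus a 4x4 base,
# then reconstructed exactly.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
pyr = laplacian_pyramid(img, levels=2)
restored = reconstruct(pyr)
```

A pyramid network replaces the fixed `up` operator and the stored residuals with learned modules, so each level only has to predict the detail missing at its own scale; this progressive refinement is what avoids the edge artifacts of one-shot large-factor upsampling.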