Advances in convolutional neural networks have led to significant progress in image super-resolution (SR). Nevertheless, most current SR methods assume bicubic downsampling as the degradation model for low-resolution (LR) images and train accordingly, which fails to account for the unknown degradation patterns present in real-world scenes. To address this problem, we propose an efficient degradation representation learning network (EDRLN). Specifically, we adopt a contrastive learning approach that enables the model to distinguish and learn diverse degradation representations in realistic images, thereby obtaining critical degradation information. We also introduce a streamlined and efficient pixel attention mechanism to strengthen the feature extraction capability of the model. In addition, we replace ordinary convolution layers with mutual affine convolution layers to make the model more lightweight while minimizing performance loss. Experimental results on remote sensing and benchmark datasets show that the proposed EDRLN performs well under different degradation scenarios, while its lightweight version incurs only a minimal performance loss. The code will be available at: https://github.com/Leilei11111/EDRLN.
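As a rough illustration of the pixel attention mentioned above, the mechanism is commonly formulated as a sigmoid-gated 1×1 convolution whose output reweights the feature map elementwise. The following NumPy sketch shows that formulation only; the function names, shapes, and weights are illustrative assumptions, not the authors' EDRLN implementation.

```python
# Minimal sketch of pixel attention (sigmoid-gated 1x1 convolution).
# Illustrative only -- not the authors' EDRLN code.
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def pixel_attention(feat, w, b):
    """Reweight a feature map with a per-pixel, per-channel attention map.

    feat: (C, H, W) input feature map
    w:    (C, C) weights of a 1x1 convolution (a linear map over channels
          applied independently at every spatial position)
    b:    (C,) bias
    Returns feat * sigmoid(conv1x1(feat)), same shape as feat.
    """
    # 1x1 convolution == channel-wise linear map at each pixel
    scores = np.einsum("oc,chw->ohw", w, feat) + b[:, None, None]
    attn = sigmoid(scores)  # attention values in (0, 1)
    return feat * attn      # elementwise gating of the features


# Toy usage with random weights
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
w = rng.standard_normal((4, 4)) * 0.1
b = np.zeros(4)
out = pixel_attention(feat, w, b)
print(out.shape)  # (4, 8, 8)
```

Because the gate lies in (0, 1), the output never exceeds the input feature magnitudes; the network learns which pixels and channels to suppress, at the cost of only a single 1×1 convolution.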