Deep convolutional neural networks have achieved unprecedented success in image super-resolution (SR) and dominated the field due to their remarkable performance. However, when the degradation of the test images is inconsistent with that of the training images, model performance drops sharply. For example, degradation can occur after dimensional stretching. In this case, the most common approach takes blurry, noisy, low-resolution (LR) images and reconstructs SR images via degradation estimation. However, the SR results of such methods depend heavily on the estimation accuracy. To overcome the difficulty of degradation estimation, this paper designs a degradation representation attention network (DRAN) for image SR, in which simple Siamese representation learning is used to extract degradation information from various LR images. Specifically, DRAN distinguishes degradations rather than estimating them, which greatly reduces the difficulty of the task. In other words, DRAN avoids pixel-level operations, turns the degradation computation problem into a degradation classification problem, and flexibly processes LR images through degradation representation learning. Finally, DRAN introduces a channel attention mechanism to further enhance SR performance. Experimental results show that the proposed scheme can distinguish different degradation modes and obtain accurate degradation information. Meanwhile, experiments on synthetic and real images show that DRAN achieves remarkable performance on blind SR tasks with good visual quality.
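The abstract does not specify how the channel attention mechanism is implemented; the following is a minimal NumPy sketch of a generic squeeze-and-excitation-style channel attention block, a common form of this mechanism. The function name, the bottleneck ratio `r`, and the weight shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Generic channel attention sketch (assumed form, not the paper's exact design).
    x:  feature map of shape (C, H, W)
    w1: bottleneck weights of shape (C // r, C)
    w2: expansion weights of shape (C, C // r)
    """
    # Squeeze: global average pooling over the spatial dims -> per-channel statistic (C,)
    s = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU), then a sigmoid gate per channel
    z = np.maximum(w1 @ s, 0.0)
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # gates in (0, 1)
    # Rescale each channel of the feature map by its learned importance
    return x * g[:, None, None]

# Hypothetical usage with random weights
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)  # same shape as x: (8, 4, 4)
```

In an SR network such blocks are typically interleaved with convolutional layers, letting features that carry useful degradation cues be amplified and uninformative channels be suppressed.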