Single image super-resolution (SISR) has achieved prominent success with deep learning. However, most SISR methods assume a specific degradation pattern, e.g., bicubic interpolation, and rely on bulky network structures, making them unsuitable for reconstructing real-world images on edge devices. To mitigate these limitations, and unlike the majority of blind super-resolution methods that pair a separate, complex degradation kernel projection network with an SR reconstruction network, we propose an end-to-end lightweight multi-degradation-oriented network that both estimates the degradation kernel and reconstructs the HR image quickly and accurately. The core of the designed network is a lightweight SR model with region non-local feature similarity learning, i.e., the Region Non-Local Feature Block (RNLFB), which establishes region-wise global feature correlation. Concretely, a larger number of RNLFBs is used to learn the more sophisticated feature representation, while fewer RNLFBs suffice for the relatively simple degradation representation. A degradation kernel projector then performs adaptive degradation-aware estimation from the degradation representation. Guided by the encoded degradation kernel, the reconstruction model learns to restore the high-resolution image from the feature representation. In this way, degradation kernel estimation and HR image reconstruction are accomplished without designing two complex networks separately, which leads to more stable and effective training at low computational cost. Extensive experiments on synthetic and real-world datasets demonstrate that our method achieves precise degradation kernel prediction and accurate HR image reconstruction with lower model complexity than other state-of-the-art SR methods.
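To make the idea of region-wise global feature correlation concrete, below is a minimal PyTorch sketch of region-restricted non-local attention: self-attention computed independently within non-overlapping spatial regions of the feature map. It is an illustrative approximation of what an RNLFB might compute, not the paper's implementation; the class name `RegionNonLocalBlock` and the `region_size` parameter are assumptions introduced here for clarity.

```python
# Minimal sketch of region-wise non-local attention (illustrative only; the
# actual RNLFB design in the paper may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionNonLocalBlock(nn.Module):
    """Self-attention restricted to non-overlapping spatial regions."""

    def __init__(self, channels: int, region_size: int = 8):
        super().__init__()
        self.region_size = region_size
        # 1x1 convolutions produce query, key, and value embeddings.
        self.theta = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.phi = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.g = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.out = nn.Conv2d(channels // 2, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        r = self.region_size
        # Pad so that height and width are divisible by the region size.
        pad_h, pad_w = (r - h % r) % r, (r - w % r) % r
        x_p = F.pad(x, (0, pad_w, 0, pad_h))
        hp, wp = x_p.shape[-2:]

        def to_regions(t: torch.Tensor) -> torch.Tensor:
            # (B, C', Hp, Wp) -> (B * num_regions, r*r, C')
            cc = t.shape[1]
            t = t.view(b, cc, hp // r, r, wp // r, r)
            return t.permute(0, 2, 4, 3, 5, 1).reshape(-1, r * r, cc)

        q = to_regions(self.theta(x_p))  # queries per region
        k = to_regions(self.phi(x_p))    # keys per region
        v = to_regions(self.g(x_p))      # values per region

        # Scaled dot-product attention within each region.
        attn = torch.softmax(q @ k.transpose(1, 2) / (q.shape[-1] ** 0.5), dim=-1)
        y = attn @ v  # (B * num_regions, r*r, C//2)

        # Fold regions back to the padded spatial layout, then crop the padding.
        y = y.view(b, hp // r, wp // r, r, r, -1)
        y = y.permute(0, 5, 1, 3, 2, 4).reshape(b, -1, hp, wp)
        y = self.out(y)[:, :, :h, :w]
        return x + y  # residual connection keeps the block lightweight to train


if __name__ == "__main__":
    block = RegionNonLocalBlock(channels=64, region_size=8)
    feat = torch.randn(1, 64, 48, 48)
    print(block(feat).shape)  # torch.Size([1, 64, 48, 48])
```

Restricting attention to regions keeps the cost linear in the number of regions rather than quadratic in all spatial positions, which is consistent with the abstract's emphasis on low computational resource costs.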