Abstract

In recent years, significant progress has been made in image super-resolution (SR) methods based on convolutional neural networks. However, most of them do not fully exploit multi-scale feature correspondence in the SR process, resulting in blurred details and artifacts in the restored images, especially for SR tasks with larger scaling factors (i.e., <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\times 4$ </tex-math></inline-formula> and <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\times 8$ </tex-math></inline-formula> ). We propose a multi-scale feature-enhanced SR network (MFENet) to address these problems. Specifically, multi-scale feature correspondence is exploited to enhance super-resolution in two ways. On the one hand, we propose an effective multi-scale non-local attention module that fully exploits the abundant low-level features at different scales of a single low-resolution (LR) feature map by exhaustively evaluating the correlations among local features, in-scale non-local features, and cross-scale non-local features. On the other hand, we introduce a feedback branch that explores the mapping from high-resolution (HR) to LR images and provides an additional constraint for image reconstruction; it penalizes incorrect SR predictions to achieve better SR performance. Experimental results on five benchmark datasets show that our method outperforms state-of-the-art methods in terms of both subjective visual quality and objective quantitative metrics.
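To make the in-scale non-local idea concrete, the sketch below implements a generic non-local attention operation on a single feature map: every spatial position aggregates features from all other positions, weighted by softmax-normalized dot-product similarity. This is a minimal illustration of the non-local mechanism the abstract refers to, not the exact MFENet module (whose architecture, scales, and weighting are not specified here); the function name and shapes are assumptions for illustration.

```python
import numpy as np

def nonlocal_attention(x):
    """Simplified in-scale non-local attention over one feature map.

    x: array of shape (C, H, W). Each spatial position attends to all
    positions; weights are a row-wise softmax of pairwise dot-product
    similarities. A hedged sketch of the generic non-local operation.
    """
    C, H, W = x.shape
    feats = x.reshape(C, H * W)              # (C, N), N = H * W positions
    sim = feats.T @ feats                    # (N, N) pairwise similarities
    sim -= sim.max(axis=1, keepdims=True)    # subtract row max for stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax weights
    out = feats @ attn.T                     # weighted aggregation per position
    return out.reshape(C, H, W)

# Toy usage: output keeps the input's shape.
x = np.random.default_rng(0).standard_normal((4, 8, 8))
y = nonlocal_attention(x)
print(y.shape)  # → (4, 8, 8)
```

In practice such a module would use learned embeddings for the similarity and a residual connection; cross-scale variants additionally compare the map against downsampled copies of itself, which is the correspondence the abstract highlights.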
