Abstract

Recent deep learning approaches in single image super-resolution (SISR) can generate high-definition textures for super-resolved (SR) images. However, they tend to hallucinate fake textures and even produce artifacts. As an alternative to SISR, reference-based SR (RefSR) approaches use high-resolution (HR) reference (Ref) images to provide HR details that are missing in the low-resolution (LR) input image. We propose a novel framework that leverages existing SISR approaches and enhances them with RefSR. Specifically, we refine the output of SISR methods using neural texture transfer, where HR features are queried from the Ref images. The query is conducted by computing the similarity of textural and semantic features between the input image and the Ref images. The HR features most similar, patch-wise, to the LR image are used to augment the SR image through an augmentation network. When the Ref images are dissimilar to the LR input image, we prevent performance degradation by including the similarity scores in the input features of the network. Furthermore, we use random texture patches during training to condition our augmentation network not to always trust the queried texture features. Unlike past RefSR approaches, our method can use arbitrary Ref images, and its lower-bound performance is that of the SR image. We show that our method drastically improves the performance of the base SISR approach.
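The patch-wise feature query described above can be sketched roughly as follows. This is only an illustration under our own assumptions (flattened feature-patch vectors, cosine similarity, a single Ref image), not the paper's actual implementation; the function name and array layout are hypothetical.

```python
import numpy as np

def query_ref_features(lr_patches, ref_patches):
    """For each LR feature patch, find the most similar Ref feature patch
    by cosine similarity and return it together with its similarity score.

    lr_patches:  (N_lr, D) array of flattened LR feature patches
    ref_patches: (N_ref, D) array of flattened Ref feature patches
    """
    # L2-normalize so that a dot product equals cosine similarity
    lr_n = lr_patches / np.linalg.norm(lr_patches, axis=1, keepdims=True)
    ref_n = ref_patches / np.linalg.norm(ref_patches, axis=1, keepdims=True)

    sim = lr_n @ ref_n.T                      # (N_lr, N_ref) similarity matrix
    best = sim.argmax(axis=1)                 # most similar Ref patch per LR patch
    scores = sim[np.arange(len(best)), best]  # similarity scores fed to the network

    return ref_patches[best], scores
```

The returned scores matter: per the abstract, they are concatenated with the network inputs so that the augmentation network can discount queried features when the best match is still a poor one.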
