Abstract

Example learning-based image super-resolution (SR) is recognized as an effective way to produce a high-resolution (HR) image with the help of an external training set. The effectiveness of learning-based SR methods, however, depends heavily on the consistency between the supporting training set and the low-resolution (LR) images to be handled. To reduce the adverse effect of incompatible high-frequency details in the training set, we propose a single-image SR approach that learns multiscale self-similarities from the LR image itself. The proposed approach builds on the observation that small patches in natural images tend to recur many times, both within the same scale and across different scales. To synthesize the missing details, we establish HR-LR patch pairs from the initial LR input and its down-sampled version to capture the similarities across scales, and we use the neighbor embedding algorithm to estimate the relationship between the LR and HR patch pairs. To fully exploit the similarities across various scales within the input LR image, we accumulate the previous resultant images as training examples for the subsequent reconstruction steps and adopt a gradual magnification scheme that upscales the LR input to the desired size step by step. In addition, to preserve sharper edges and suppress aliasing artifacts, we apply the nonlocal means method to learn the similarity within the same scale and formulate a nonlocal prior regularization term that makes the SR estimation well-posed under a reconstruction-based SR framework. Experimental results demonstrate that the proposed method produces compelling SR recovery, both quantitatively and perceptually, in comparison with other state-of-the-art baselines.
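As a rough illustration of the neighbor embedding step described above, the following sketch (in Python with NumPy; the function names, the choice of K, and the regularization constant are illustrative assumptions, not details taken from the paper) estimates an HR patch by computing LLE-style reconstruction weights over the K nearest LR example patches and then applying the same weights to their paired HR patches:

```python
import numpy as np

def neighbor_embedding_weights(x, neighbors, reg=1e-6):
    """LLE-style weights w (summing to 1) that best reconstruct the LR
    query patch x from its K nearest LR example patches (the rows of
    `neighbors`), via the usual constrained least-squares formulation."""
    diffs = neighbors - x                      # (K, d): neighbors shifted to the query
    G = diffs @ diffs.T                        # (K, K) local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))    # small ridge term for numerical stability
    w = np.linalg.solve(G, np.ones(len(G)))    # solve G w = 1
    return w / w.sum()                         # enforce the sum-to-one constraint

def synthesize_hr_patch(lr_patch, lr_examples, hr_examples, k=5):
    """Estimate the missing high-frequency content of one LR patch by
    transferring its LR-space embedding weights to the paired HR examples."""
    dists = np.linalg.norm(lr_examples - lr_patch, axis=1)
    idx = np.argsort(dists)[:k]                # indices of the K nearest LR examples
    w = neighbor_embedding_weights(lr_patch, lr_examples[idx])
    return w @ hr_examples[idx]                # same weights applied in HR patch space
```

In the multiscale scheme sketched in the abstract, `lr_examples` and `hr_examples` would hold patch pairs harvested from the LR input and its down-sampled version (and, at later magnification steps, from the accumulated intermediate results), so no external training set is required.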
