Abstract

In this paper, we propose a novel single image super-resolution (SR) method based on low-rank sparse representation with self-similarity learning. Sparse representation is known as a promising method for SR. However, the sparse codes for low-resolution (LR) patches obtained by conventional methods are not faithful to those of the original high-resolution (HR) patches. To overcome this defect, we exploit the structure of the sparse representations of nonlocal similar patches in natural images through a low-rank strategy, under the assumption that the sparse codes of nonlocal similar patches should form a low-rank matrix. The low-rank constraint lets similar patches share common code components and suppresses coding noise, which improves coding accuracy and SR performance. Furthermore, we use a self-similarity learning framework to generate a self-example dictionary compatible with the low-rank sparse representation based SR. Experimental results demonstrate that the proposed method recovers high-quality SR results both quantitatively and perceptually.
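
The following is a minimal sketch, not the authors' implementation, of the core idea described above: the sparse codes of a group of nonlocal similar patches are stacked as columns of a matrix, and a low-rank penalty is imposed on that matrix alongside the usual sparsity penalty. The dictionary D, the patch grouping, the alternating proximal-gradient solver, and all parameter values are illustrative assumptions.

```python
import numpy as np


def soft_threshold(x, tau):
    """Element-wise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)


def singular_value_threshold(a, tau):
    """Proximal operator of the nuclear norm: shrink singular values by tau."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt


def low_rank_sparse_codes(P, D, lam=0.05, mu=0.1, n_iter=100):
    """Jointly sparse and low-rank coding of a group of similar patches.

    P : (d, m) matrix whose columns are m vectorized nonlocal similar patches.
    D : (d, k) dictionary (e.g., built from self-examples).
    Returns A : (k, m) code matrix, encouraged to be sparse and low-rank.

    Approximately minimizes (by alternating proximal gradient steps, a
    heuristic stand-in for the paper's optimization):
        0.5 * ||P - D A||_F^2 + lam * ||A||_1 + mu * ||A||_*
    """
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], P.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - P)                    # gradient of the data-fidelity term
        A = A - step * grad
        A = soft_threshold(A, step * lam)           # enforce sparsity of each code
        A = singular_value_threshold(A, step * mu)  # enforce low rank across the group
    return A


if __name__ == "__main__":
    # Synthetic stand-in data: 8x8 patches, 128 atoms, 20 similar patches.
    rng = np.random.default_rng(0)
    d, k, m = 64, 128, 20
    D = rng.standard_normal((d, k))
    D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
    P = rng.standard_normal((d, m))       # placeholder for a group of similar LR patches
    A = low_rank_sparse_codes(P, D)
    print("code matrix rank:", np.linalg.matrix_rank(A, tol=1e-6))
    print("nonzero fraction:", np.mean(np.abs(A) > 1e-6))
```

In a full SR pipeline, each HR patch estimate would then be reconstructed as D A (with a dictionary pair or mapping learned from self-examples) and the patches aggregated back into the image; those steps are omitted here.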
