Abstract

Recent approaches to image super-resolution learn dictionary pairs that model the relationship between low-resolution and high-resolution image patches under sparsity constraints on the patch representation. Most previous approaches in this direction assume, for simplicity, that the sparse codes for a low-resolution patch are equal to those of the corresponding high-resolution patch. However, this invariance assumption is not quite accurate, especially for large scaling factors, where the optimal weights and indices of representative features are not fixed across the scaling transformation. In this paper, we propose an augmented coupled dictionary learning scheme that compensates for the inaccuracy of the invariance assumption. First, we learn a dictionary for the low-resolution image space. Then, we compute an augmented dictionary in the high-resolution image space, where novel augmented dictionary atoms are inferred from the training error of the low-resolution dictionary. For a low-resolution test image, the sparse codes of the low-resolution patches and the low-resolution dictionary training error are combined with the trained high-resolution dictionary to produce a high-resolution image. Our experimental results compare favourably with the non-augmented scheme.
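The pipeline sketched in the abstract can be illustrated with a minimal toy example. The sketch below is an assumption-laden illustration, not the paper's actual algorithm: the data are random, the low-resolution dictionary is fixed rather than learned (e.g., by K-SVD), the sparse coder is a simple greedy OMP-style routine, and the "augmented" high-resolution dictionary is here obtained by a least-squares fit that maps the stacked representation [sparse codes; low-resolution residual] to high-resolution patches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data (not from the paper): random patch pairs,
# one training patch per column.
n_lr, n_hr, n_atoms, n_train = 9, 36, 20, 200
X_l = rng.standard_normal((n_lr, n_train))   # low-resolution patches
X_h = rng.standard_normal((n_hr, n_train))   # high-resolution patches

# Step 1: a low-resolution dictionary D_l with unit-norm atoms.
# In practice this would be learned; here it is fixed and random.
D_l = rng.standard_normal((n_lr, n_atoms))
D_l /= np.linalg.norm(D_l, axis=0)

def sparse_codes(D, X, k=3):
    """Greedy OMP-style coding: select k atoms per column of X."""
    A = np.zeros((D.shape[1], X.shape[1]))
    for j in range(X.shape[1]):
        r, idx = X[:, j].copy(), []
        for _ in range(k):
            scores = np.abs(D.T @ r)
            scores[idx] = -1.0          # do not re-select an atom
            idx.append(int(np.argmax(scores)))
            sub = D[:, idx]
            coef, *_ = np.linalg.lstsq(sub, X[:, j], rcond=None)
            r = X[:, j] - sub @ coef
        A[idx, j] = coef
    return A

A = sparse_codes(D_l, X_l)
E = X_l - D_l @ A                 # low-resolution training error

# Step 2: "augmented" high-resolution mapping. Stack the sparse codes
# with the residual and fit high-resolution patches by least squares.
Z = np.vstack([A, E])
D_h_aug = X_h @ np.linalg.pinv(Z)

# Test time: code a low-resolution patch, keep its residual, and
# reconstruct the high-resolution patch from the stacked vector.
x_l = X_l[:, :1]
a = sparse_codes(D_l, x_l)
e = x_l - D_l @ a
x_h_hat = D_h_aug @ np.vstack([a, e])
```

The key point the sketch conveys is that the residual `E`, which the invariance assumption discards, carries information about the low-resolution patch that the augmented high-resolution dictionary can exploit.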

