Abstract

Recent research has demonstrated that the performance of sparse representation based methods for single image super-resolution (SISR) reconstruction depends strongly on the accuracy of the sparse coding coefficients, and several more accurate models have accordingly been developed that exploit the nonlocal patch redundancy within the observed image. However, the capability of these models may be limited because they fail to simultaneously consider the redundant information within the same scale and across multiple scales. In this paper, an improved SISR reconstruction method is therefore proposed, in which a complementary pair of l1-norm regularization terms is first constructed by exploiting multiscale self-similarity. The computed sparse coefficients are then aligned to this pair of references in order to suppress sparse coding noise, which results in more faithful recoveries. Finally, based on the conventional iterative shrinkage-thresholding algorithm, a local-to-global and coarse-to-fine implementation is established to solve the proposed model effectively. Extensive experiments on both synthetic and real images demonstrate that the proposed method delivers promising SISR performance and surpasses recently published counterparts in terms of both objective evaluation and visual perception.
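The abstract describes aligning sparse codes to reference estimates via an l1 penalty solved with iterative shrinkage-thresholding. The sketch below illustrates that general idea only; the function names, the single reference code `beta`, and all parameter values are hypothetical assumptions, not the authors' actual formulation, which uses a pair of multiscale self-similarity regularizers.

```python
import numpy as np

def soft_threshold(x, tau):
    # Element-wise soft-thresholding: proximal operator of tau * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_with_reference(y, D, beta, lam=0.1, n_iter=100):
    """Sparse coding of a patch y over dictionary D with an extra l1 penalty
    lam * ||alpha - beta||_1 that pulls the code toward a reference estimate
    beta (e.g. derived from self-similar patches). Hypothetical sketch of the
    shrinkage-thresholding iteration, not the paper's exact model."""
    alpha = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - y)            # gradient of 0.5 * ||y - D alpha||^2
        z = alpha - step * grad
        # Proximal step for lam * ||alpha - beta||_1: soft-thresholding centered at beta,
        # which shrinks the sparse coding noise (the deviation of alpha from beta).
        alpha = beta + soft_threshold(z - beta, lam * step)
    return alpha
```

As a usage note, `beta` would be re-estimated from the current reconstruction at each outer iteration, so the local shrinkage step and the global update alternate in a coarse-to-fine fashion.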
