Abstract

Single image super-resolution (SR) plays an important role in many computer vision systems. However, as a highly ill-posed problem, its performance relies mainly on prior knowledge. Among such priors, the non-local total variation (NLTV) prior is popular and has been studied extensively in recent years. Nevertheless, technical challenges remain. Because NLTV exploits only a fixed, non-shifted target patch in the patch search process, a shortage of similar patches is inevitable in some cases; the non-local similarity therefore cannot be fully characterized, and the effectiveness of NLTV cannot be guaranteed. Motivated by the observation that more accurate non-local similar patches can be found by using shifted target patches, a novel multishifted similar-patch search (MSPS) strategy is proposed. With this strategy, NLTV is extended into a newly proposed super-high-dimensional NLTV (SHNLTV) prior that fully exploits the underlying non-local similarity. However, because SHNLTV is very high-dimensional, applying it directly to SR is computationally prohibitive. To solve this problem, a novel statistics-based dimension-reduction strategy is proposed and applied to SHNLTV, yielding a more computationally efficient prior that we call adaptive high-dimensional non-local total variation (AHNLTV). In AHNLTV, a novel joint weighting strategy that fully exploits the potential of MSPS-based non-local similarity is proposed. To further boost the performance of AHNLTV, the adaptive geometric duality (AGD) prior is also incorporated. Finally, an efficient split Bregman iteration-based algorithm is developed to solve the AHNLTV-AGD-driven minimization problem. Extensive experiments validate that the proposed method achieves better results than many state-of-the-art SR methods in terms of both objective and subjective quality.
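The core idea behind MSPS, searching for similar patches around several shifted copies of the target patch rather than a single fixed one, can be illustrated with a minimal sketch. The function below is a simplified, hypothetical illustration only: the patch size, shift set, search-window radius, and squared-L2 distance are assumptions for demonstration, not the paper's exact settings or weighting scheme.

```python
import numpy as np

def msps_search(img, i, j, patch=5, shifts=(-1, 0, 1), window=10, k=4):
    """Illustrative multishifted similar-patch search (MSPS) sketch.

    For each shifted copy of the target patch centered at (i, j), scan a
    local search window and collect candidate patches; return the k
    candidates closest to their shifted target in squared L2 distance.
    All parameters are illustrative assumptions, not the paper's values.
    """
    h, w = img.shape
    half = patch // 2
    matches = []  # (distance, (shift_dy, shift_dx), (cand_y, cand_x))
    for dy in shifts:
        for dx in shifts:
            ty, tx = i + dy, j + dx  # center of the shifted target patch
            if not (half <= ty < h - half and half <= tx < w - half):
                continue  # shifted target would fall outside the image
            target = img[ty - half:ty + half + 1, tx - half:tx + half + 1]
            for cy in range(max(half, ty - window), min(h - half, ty + window + 1)):
                for cx in range(max(half, tx - window), min(w - half, tx + window + 1)):
                    if (cy, cx) == (ty, tx):
                        continue  # skip the trivial self-match
                    cand = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
                    d = float(np.sum((target - cand) ** 2))
                    matches.append((d, (dy, dx), (cy, cx)))
    matches.sort(key=lambda t: t[0])
    return matches[:k]
```

With the non-shifted search of plain NLTV, only the `(0, 0)` shift would be used; enlarging the shift set enlarges the pool of candidate matches, which is the motivation for the higher-dimensional SHNLTV prior described above.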
