Abstract

This paper presents a novel learning-based method for single image super-resolution (SR). Given an input low-resolution image and its image pyramid, we propose to perform context-constrained image segmentation and construct an image segment dataset with different context categories. By learning context-specific sparse image representations, our method models the relationship between interpolated image patches and their ground-truth pixel values in each context category via support vector regression (SVR). To synthesize the final SR output, we upsample the input image by bicubic interpolation and refine each image patch using the SVR model learned for its associated context category. Unlike prior learning-based SR methods, our approach does not require the recurrence of similar image patches (within or across image scales), nor does it require collecting low- and high-resolution training image pairs in advance. Empirical results show that our method is quantitatively and qualitatively more effective than existing interpolation-based or learning-based SR approaches.
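
The sketch below illustrates the overall pipeline described above at a toy scale: bicubic upsampling, per-context SVR models mapping interpolated patches to true center-pixel values, and patch-wise refinement. It is a minimal sketch only, assuming scikit-learn's SVR and scipy's bicubic zoom; the `context_of` labeling function stands in for the paper's context-constrained segmentation and sparse representation, which are not reproduced here, and all parameter choices (patch size, kernel, C, epsilon) are illustrative rather than the authors' settings.

```python
# Minimal sketch of the described SR pipeline (not the authors' implementation).
# Assumptions: scikit-learn SVR, scipy bicubic zoom, a toy context-labeling
# function in place of context-constrained segmentation / sparse coding.
import numpy as np
from scipy.ndimage import zoom
from sklearn.svm import SVR

PATCH = 5            # patch size (hypothetical choice)
RADIUS = PATCH // 2

def extract_patches(img):
    """Collect flattened PATCH x PATCH patches and their center coordinates."""
    H, W = img.shape
    feats, centers = [], []
    for y in range(RADIUS, H - RADIUS):
        for x in range(RADIUS, W - RADIUS):
            feats.append(img[y - RADIUS:y + RADIUS + 1,
                             x - RADIUS:x + RADIUS + 1].ravel())
            centers.append((y, x))
    return np.array(feats), centers

def train_context_svrs(low_img, high_img, context_of):
    """Train one SVR per context: interpolated patch -> ground-truth center pixel.
    `context_of` maps a patch feature vector to a context label (here supplied
    by the caller as a stand-in for context-constrained segmentation)."""
    factors = np.array(high_img.shape) / np.array(low_img.shape)
    up = zoom(low_img, factors, order=3)              # bicubic interpolation
    feats, centers = extract_patches(up)
    targets = np.array([high_img[y, x] for (y, x) in centers])
    labels = np.array([context_of(f) for f in feats])
    models = {}
    for c in np.unique(labels):
        idx = labels == c
        models[c] = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(feats[idx], targets[idx])
    return models

def super_resolve(low_img, scale, models, context_of):
    """Bicubic upsample, then refine each patch center with its context's SVR."""
    up = zoom(low_img, scale, order=3)
    out = up.copy()
    feats, centers = extract_patches(up)
    for f, (y, x) in zip(feats, centers):
        out[y, x] = models[context_of(f)].predict(f[None, :])[0]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    high = rng.random((32, 32))                 # stand-in "ground truth" image
    low = zoom(high, 0.5, order=3)              # simulated low-resolution input
    ctx = lambda f: int(f.var() > 0.05)         # toy two-category "context"
    svrs = train_context_svrs(low, high, ctx)
    sr = super_resolve(low, 2.0, svrs, ctx)
    print("SR output shape:", sr.shape)
```

In the paper's setting, the per-patch features would come from context-specific sparse coding rather than raw pixel values, and training pairs are drawn from the input image's own pyramid, which is why no external low/high-resolution training set is needed.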
