Abstract

Self-example-based super-resolution (SR) methods use internal dictionaries to reconstruct a high-resolution (HR) image from a single low-resolution (LR) input image. In general, a square patch is used to find LR-HR correspondences in the dictionaries. However, finding these correspondences is difficult because the LR input image and the dictionaries are at different scales. Motivated by this observation, we propose a novel self-example-based SR method that uses context-dependent, multi-shaped subpatches. Each LR input patch is segmented into multiple subpatches according to the context of the patch, enabling better LR-HR correspondences to be extracted. Our experimental results show that the proposed subpatch-based SR produces HR images that are competitive with state-of-the-art methods, with sharper edges and better visual quality.
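
The following is a minimal, illustrative sketch of the general idea described above, not the authors' implementation: it builds an internal LR-HR dictionary from the input image itself, splits each LR patch into subpatches using a simple stand-in for the paper's context-dependent segmentation (here, the dominant gradient orientation), and matches each subpatch against the dictionary. All function names, patch sizes, and the gradient-based splitting rule are hypothetical choices made for illustration.

import numpy as np

def downscale(img, s=2):
    # Block-average downscaling used to build the internal LR dictionary.
    h, w = img.shape[0] // s * s, img.shape[1] // s * s
    return img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def split_by_context(patch):
    # Split a square patch into two subpatches along its dominant edge
    # direction (a stand-in for the paper's context-dependent segmentation).
    gy, gx = np.gradient(patch)
    if np.abs(gx).sum() >= np.abs(gy).sum():       # mostly vertical edges
        half = patch.shape[1] // 2
        return [patch[:, :half], patch[:, half:]]  # left / right halves
    half = patch.shape[0] // 2
    return [patch[:half, :], patch[half:, :]]      # top / bottom halves

def build_dictionary(lr_img, s=2, p=8):
    # Internal dictionary: LR patches from a further-downscaled copy of the
    # input, paired with their co-located patches from the input itself.
    small = downscale(lr_img, s)
    pairs = []
    for i in range(0, small.shape[0] - p, p):
        for j in range(0, small.shape[1] - p, p):
            lr_patch = small[i:i + p, j:j + p]
            hr_patch = lr_img[i * s:(i + p) * s, j * s:(j + p) * s]
            pairs.append((lr_patch, hr_patch))
    return pairs

def best_match(subpatch, pairs):
    # Nearest-neighbour search over same-shaped crops of dictionary entries.
    best, best_err = None, np.inf
    h, w = subpatch.shape
    for lr_patch, hr_patch in pairs:
        err = np.sum((lr_patch[:h, :w] - subpatch) ** 2)
        if err < best_err:
            best, best_err = hr_patch, err
    return best

# Toy usage: one LR patch -> context-dependent subpatches -> matched HR examples.
rng = np.random.default_rng(0)
lr = rng.random((64, 64))
dictionary = build_dictionary(lr)
patch = lr[10:18, 10:18]
for sub in split_by_context(patch):
    hr_example = best_match(sub, dictionary)
    print(sub.shape, hr_example.shape)

The sketch only returns matched HR examples; a full SR pipeline would additionally paste or blend the matched HR patches into the upscaled output, which is omitted here.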
