Abstract

Under multiple degradations, current deep-learning-based grayscale image super-resolution (SR) methods process all components of an image equally, which causes subtle details to be lost. To address this issue, we design a cartoon-texture decomposition-based (CTD) module that automatically decomposes an image into a smooth cartoon component and an oscillatory texture component. The CTD module is a plug-and-play prior module that can be applied to imaging inverse problems. Specifically, for the SR task under multiple degradations, we use CTD as a prior module to build an unfolding SR network termed CTDNet. For the SR task on real terahertz images, the boundary recovered by CTDNet (i.e., the boundary between the object of interest and the carrier table) exhibits artifacts, which limits its practical applications. To reduce these boundary artifacts, we post-process the SR terahertz images with a boundary artifact reduction method. Experimental results on a synthetic dataset and real terahertz images demonstrate that the proposed algorithms preserve subtle details and achieve comparable visual results. The code can be found at https://github.com/shibaoshun/CTDNet.
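To illustrate the idea of cartoon-texture decomposition, the sketch below splits a grayscale image into a smooth low-pass "cartoon" part and an oscillatory "texture" residual. This is a minimal stand-in using Gaussian smoothing, not the authors' CTD module; the function names and the choice of `sigma` are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D normalized Gaussian kernel of half-width `radius`."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def decompose(image, sigma=2.0):
    """Toy cartoon-texture split: image = cartoon + texture.

    The cartoon part is a Gaussian low-pass of the image (smooth
    structure); the texture part is the high-frequency residual
    (oscillatory detail). A real CTD module would learn or solve
    for this split rather than use a fixed Gaussian filter.
    """
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    # Separable convolution with edge padding, rows then columns.
    padded = np.pad(image, radius, mode="edge")
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    cartoon = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="valid"), 0, rows)
    texture = image - cartoon
    return cartoon, texture
```

The split is exact by construction (the two parts sum back to the input), which is what lets such a module be plugged into an unfolding network: each component can be regularized separately and then recombined.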
