Abstract
Learning-based super-resolution (SR) has made remarkable progress in improving image quality compared to traditional methods. However, most of these algorithms assume an ideal, known image degradation process, such as bicubic downsampling, so their performance drops significantly when the degradation kernel of the low-resolution image changes. Blind SR networks that estimate the degradation kernel for each image are therefore better suited to realistic scenarios, which makes accurate and efficient kernel estimation critically important. Previous blind SR designs, however, constrain the kernel estimation network with image information alone and train the network during inference, resulting in limited performance and very slow runtimes. In this paper, we are the first to impose constraints on the kernel estimation network in both the image domain and the kernel domain to effectively optimize the estimated degradation kernels. Furthermore, an efficient multi-stage network structure is leveraged to accelerate inference while producing high-quality kernels. Evaluation on publicly available datasets and in realistic scenarios shows that an SR network based on the proposed design not only produces state-of-the-art high-resolution images but also achieves a kernel-estimation runtime of 0.03 s when enlarging a low-resolution image by a factor of 4 on a single NVIDIA 2080Ti GPU.
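To make the dual-domain idea concrete, the sketch below shows one plausible way to combine a kernel-domain constraint with an image-domain constraint in a single training objective. It is an illustrative PyTorch-style example only, not the paper's actual implementation: the function name, the choice of L1 losses, and the weighting factors are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F


def dual_domain_loss(est_kernel, gt_kernel, sr_image, hr_image,
                     lambda_kernel=1.0, lambda_image=1.0):
    """Hypothetical combined objective for a kernel estimation network.

    Kernel-domain term: penalize the distance between the estimated
    degradation kernel and the ground-truth kernel.
    Image-domain term: penalize the distance between the SR result
    (reconstructed with the estimated kernel) and the HR target.
    """
    kernel_term = F.l1_loss(est_kernel, gt_kernel)   # kernel-domain constraint
    image_term = F.l1_loss(sr_image, hr_image)       # image-domain constraint
    return lambda_kernel * kernel_term + lambda_image * image_term


# Example usage with dummy tensors (batch of 21x21 kernels, 3-channel images).
est_k = torch.rand(4, 1, 21, 21)
gt_k = torch.rand(4, 1, 21, 21)
sr = torch.rand(4, 3, 128, 128)
hr = torch.rand(4, 3, 128, 128)
loss = dual_domain_loss(est_k, gt_k, sr, hr)
```

Supervising the kernel estimator in both domains, rather than through image reconstruction alone, is what allows the network to be trained offline and run in a single forward pass at inference time instead of being optimized per image.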