Abstract
Single image super-resolution (SISR) based on deep convolutional neural networks (CNNs) has recently been widely studied and has achieved remarkable results. However, most of these methods simply widen or deepen the network to improve performance, ignoring the self-similarity present in natural images. Some works have successfully leveraged this self-similarity through non-local attention modules, but they apply such modules at only one or two scales. Moreover, they must compute the self-similarity within every residual module, which makes the network more complex and lengthens training. We propose a multiple-scale self-similarity (MSSS) module that can be flexibly inserted into various existing super-resolution networks. MSSS uses convolutions with different kernel sizes, followed by a trailing dilated convolution, to obtain feature maps at different scales; it then computes the correlations among these multi-scale feature maps through two-head region-level recurrent criss-cross attention (TRRCCA) modules, and finally produces a fused output via channel attention. MSSS can therefore mine self-similarities across multiple scales. Inserting a single MSSS module into a basic residual SISR network yields a significant improvement in results.
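The abstract describes a pipeline of multi-scale branches (different kernel sizes plus a trailing dilated convolution), cross-scale attention, and channel-attention fusion, inserted as a residual unit. A minimal PyTorch sketch of that structure is below; the class names, kernel sizes, and reduction ratio are assumptions for illustration, and the TRRCCA step is left as a placeholder since its details are not given in the abstract.

```python
# Hypothetical sketch of the MSSS module's overall structure.
# MSSSBlock, ChannelAttention, branch kernel sizes (3x3, 5x5, dilated 3x3),
# and the reduction ratio are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention used for the final fusion."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # global pooling per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                     # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)


class MSSSBlock(nn.Module):
    """Multi-scale branches -> (TRRCCA placeholder) -> channel-attention fusion,
    wrapped as a residual unit so it can be dropped into an SR backbone."""

    def __init__(self, channels: int):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        # trailing dilated convolution enlarges the receptive field at the same cost
        self.branch_dil = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.ca = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # TRRCCA (cross-scale criss-cross attention) would correlate the three
        # branch outputs here; an identity placeholder stands in for it.
        feats = torch.cat([self.branch3(x), self.branch5(x), self.branch_dil(x)], dim=1)
        return x + self.ca(self.fuse(feats))


x = torch.randn(1, 64, 32, 32)
y = MSSSBlock(64)(x)  # output keeps the input's shape, as a residual unit must
```

Because the block is spatially shape-preserving and residual, it can in principle be inserted after any convolutional stage of an existing SISR network without changing the rest of the architecture.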