Abstract
Image super-resolution (SR) is a fundamental technique in image processing and computer vision. Recently, deep learning has driven remarkable progress in many super-resolution approaches. However, we observe that most studies focus on designing deeper and wider architectures to improve the quality of image SR, at the cost of computational burden and speed. Few studies adopt lightweight yet effective modules to improve the efficiency of SR without compromising its performance. In this paper, we propose the wavelet-based residual attention network (WRAN) for image SR. Specifically, the input and label of our network are the four coefficients generated by the two-dimensional (2D) wavelet transform, which reduces the training difficulty of our network by explicitly separating low-frequency content and high-frequency details into four channels. We propose multi-kernel convolutional layers as the basic modules of our network, which adaptively aggregate features from receptive fields of various sizes. We adopt a residual attention block (RAB) that contains channel attention and spatial attention modules, so our method can focus on the more crucial underlying patterns in both the channel and spatial dimensions in a lightweight manner. Extensive experiments validate that our WRAN is computationally efficient and achieves competitive results against state-of-the-art SR methods.
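To make the four-subband representation concrete, the following sketch performs a single-level 2D Haar wavelet decomposition, splitting a grayscale image into LL (low-frequency approximation), LH, HL, and HH (detail) channels at half resolution. This is an illustrative assumption: the abstract does not specify which wavelet basis WRAN uses, and the function name `haar_dwt2` is hypothetical.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform (illustrative sketch).

    Splits an even-sized grayscale image into four half-resolution
    subbands: LL (approximation) plus LH, HL, HH (detail channels),
    mirroring the four-channel input/label described in the abstract.
    """
    # Group pixels into non-overlapping 2x2 blocks.
    a = img[0::2, 0::2]  # top-left of each block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    # Orthonormal Haar combinations of the 2x2 block.
    ll = (a + b + c + d) / 2.0  # low-frequency approximation
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

# Example: a 4x4 image yields four 2x2 subbands.
img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print([s.shape for s in (ll, lh, hl, hh)])
```

Stacking the four subbands as channels gives the network explicit access to separated low- and high-frequency structure, which is the training simplification the abstract attributes to the wavelet-domain formulation.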