Abstract

Recent advances in efficient image super-resolution (EISR) have largely come from convolutional neural networks, which rely on distillation and aggregation strategies with numerous channel-split and concatenation operations to fully exploit limited hierarchical features. Transformer networks, in contrast, remain challenging for EISR because multi-head self-attention is computationally demanding. To address this challenge, this paper proposes replacing multi-head self-attention in the Transformer network with global filtering and recursive gated convolution. This strategy yields a high-order spatial interaction and residual global filter network for efficient image super-resolution (HorSR), which comprises three components: a shallow feature extraction module, a deep feature extraction module, and a high-quality image reconstruction module. In particular, the deep feature extraction module is built from residual global filtering and recursive gated convolution blocks. Experimental results show that the HorSR network achieves state-of-the-art performance with the lowest FLOPs among existing EISR methods.
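To make the three-stage layout described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: module names (GlobalFilter, GatedConvBlock, HorSRSketch), the fixed patch size, channel widths, and the simplified second-order gated convolution (used here in place of the paper's recursive gated convolution) are all assumptions for illustration only.

```python
# Minimal sketch of a HorSR-style layout under assumed hyperparameters.
import torch
import torch.nn as nn


class GlobalFilter(nn.Module):
    """Frequency-domain filtering: FFT -> learnable complex filter -> inverse FFT."""

    def __init__(self, channels, height, width):
        super().__init__()
        # One learnable complex weight per channel and frequency bin (stored as real pairs).
        self.weight = nn.Parameter(torch.randn(channels, height, width // 2 + 1, 2) * 0.02)

    def forward(self, x):
        b, c, h, w = x.shape
        freq = torch.fft.rfft2(x, dim=(-2, -1), norm="ortho")
        freq = freq * torch.view_as_complex(self.weight)
        return torch.fft.irfft2(freq, s=(h, w), dim=(-2, -1), norm="ortho")


class GatedConvBlock(nn.Module):
    """Simplified gated convolution: split features, spatially mix one branch with a
    depthwise conv, and use the result to gate the other branch (single-order sketch)."""

    def __init__(self, channels):
        super().__init__()
        self.proj_in = nn.Conv2d(channels, 2 * channels, kernel_size=1)
        self.dwconv = nn.Conv2d(channels, channels, kernel_size=7, padding=3, groups=channels)
        self.proj_out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        p, q = self.proj_in(x).chunk(2, dim=1)
        return self.proj_out(p * self.dwconv(q)) + x  # local residual connection


class HorSRSketch(nn.Module):
    """Three components: shallow feature extraction, deep feature extraction
    (global filtering + gated convolution blocks), and image reconstruction."""

    def __init__(self, channels=48, num_blocks=4, scale=4, patch=64):
        super().__init__()
        self.shallow = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        blocks = []
        for _ in range(num_blocks):
            blocks += [GlobalFilter(channels, patch, patch), GatedConvBlock(channels)]
        self.deep = nn.Sequential(*blocks)
        # Pixel-shuffle upsampler reconstructs the high-resolution image.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        shallow = self.shallow(lr)
        deep = self.deep(shallow) + shallow  # long residual over the deep module
        return self.reconstruct(deep)


if __name__ == "__main__":
    model = HorSRSketch()
    sr = model(torch.randn(1, 3, 64, 64))
    print(sr.shape)  # torch.Size([1, 3, 256, 256])
```

The sketch keeps the spatial mixing entirely convolutional and FFT-based, which is the abstract's motivation for avoiding multi-head self-attention; the learnable frequency-domain filter requires a fixed training patch size in this simplified form.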
