Single image super-resolution (SISR), which aims to restore high-frequency details and textures from a low-resolution input image, is an active research topic in image processing, computer vision, and pattern recognition. In this paper, we aim to build more accurate and faster SISR models by developing better-performing feature extraction and fusion techniques. First, we propose a novel Orientation-Aware feature extraction/selection Module (OAM), which combines 1D and 2D convolutional kernels (i.e., 3×1, 1×3, and 3×3) to extract orientation-aware features. A channel attention mechanism is deployed within each OAM to perform scene-specific selection among the outputs of the orientation-dependent kernels (e.g., horizontal, vertical, and diagonal). Second, we present an effective fusion architecture that progressively integrates the multi-scale features extracted at different convolutional stages. Instead of directly combining low-level and high-level features, similar outputs of adjacent feature extraction modules are grouped and further compressed into a more concise representation of each convolutional stage, which benefits the high-accuracy SISR task. Building on these two improvements, we present a compact yet effective CNN-based model for high-quality SISR via Progressive Fusion of Orientation-Aware features (SISR-PF-OA). Extensive experimental results verify the superiority of the proposed SISR-PF-OA model, which performs favorably against state-of-the-art models in terms of both restoration accuracy and computational efficiency; for example, on the Manga109 dataset at scale factor ×4, SISR-PF-OA outperforms the RCAN model, achieving a higher PSNR (31.25 dB vs. 31.21 dB) with fewer FLOPs (764.41 G vs. 1020.28 G). The source code will be made publicly available.
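To make the OAM design concrete, the following is a minimal PyTorch sketch of one plausible realization: parallel 3×1, 1×3, and 3×3 branches whose concatenated outputs are reweighted by squeeze-and-excitation-style channel attention and then compressed back to the input width. The module and class names, the reduction ratio, the branch layout, and the residual connection are illustrative assumptions, not details confirmed by the abstract.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (a common SISR choice)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 4)  # avoid collapsing to zero channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reweight each channel by a learned, input-dependent scalar.
        return x * self.fc(self.pool(x))


class OAM(nn.Module):
    """Hypothetical Orientation-Aware Module: a mixture of 1D and 2D kernels
    (1x3 horizontal, 3x1 vertical, 3x3 full) followed by channel attention
    that performs scene-specific selection among the orientation branches."""

    def __init__(self, channels: int):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.vertical = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.full = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)
        self.attention = ChannelAttention(3 * channels)
        self.compress = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Extract orientation-dependent features in parallel branches.
        feats = torch.cat([self.horizontal(x), self.vertical(x), self.full(x)], dim=1)
        # Attention selects the informative orientation channels per input.
        feats = self.attention(self.act(feats))
        # Compress back to the input width; residual skip is an assumption.
        return x + self.compress(feats)


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)  # (batch, channels, height, width)
    print(OAM(64)(x).shape)  # torch.Size([1, 64, 48, 48])
```

Under this reading, the 1×1 compression after attention is also a natural building block for the progressive fusion stage, where grouped outputs of adjacent modules are similarly condensed into a concise per-stage representation.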