Abstract

Improving the representational power of visual features extracted by deep convolutional neural networks is of crucial importance for high-quality image super-resolution. To address this issue, we propose a multi-attention augmented network, which mainly consists of content-, orientation-, and position-aware modules. Specifically, we develop an attention augmented U-net structure to form the content-aware module, which learns and combines multi-scale informative features within a large receptive field. To better reconstruct image details in different directions, we design a set of pre-defined sparse kernels to construct the orientation-aware module, which extracts more representative multi-orientation features and enhances the discriminative capacity of the stacked convolutional stages. These extracted features are then adaptively fused through a channel attention mechanism. In the upscaling stage, the position-aware module adopts a novel self-attention mechanism to reweight the element-wise values of the final low-resolution feature maps, further suppressing possible artifacts. Experimental results demonstrate that our method achieves better reconstruction accuracy and perceptual quality than state-of-the-art methods.
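To make the orientation-aware idea more concrete, the following is a minimal PyTorch sketch of how fixed sparse directional kernels might extract multi-orientation features that are then adaptively fused with a squeeze-and-excitation style channel attention. The kernel patterns, channel sizes, and the OrientationAwareBlock name are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


def directional_kernels():
    """Return 3x3 sparse kernels sensitive to horizontal, vertical, and the
    two diagonal orientations (assumed patterns, one per orientation)."""
    h = torch.tensor([[0., 0., 0.], [-1., 0., 1.], [0., 0., 0.]])
    v = h.t()
    d1 = torch.tensor([[-1., 0., 0.], [0., 0., 0.], [0., 0., 1.]])
    d2 = torch.tensor([[0., 0., -1.], [0., 0., 0.], [1., 0., 0.]])
    return torch.stack([h, v, d1, d2])  # (4, 3, 3)


class OrientationAwareBlock(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        k = directional_kernels()  # fixed, not learned
        # One depthwise filter per orientation, shared across all channels.
        weight = k.unsqueeze(1).repeat_interleave(channels, dim=0)  # (4*C, 1, 3, 3)
        self.register_buffer("weight", weight)
        self.channels = channels
        # Pointwise conv merges the 4 orientation responses back to C channels.
        self.merge = nn.Conv2d(4 * channels, channels, kernel_size=1)
        # Squeeze-and-excitation style channel attention for adaptive fusion.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Depthwise convolution with the fixed sparse orientation kernels.
        feats = F.conv2d(x.repeat(1, 4, 1, 1), self.weight,
                         padding=1, groups=4 * self.channels)
        fused = self.merge(feats)
        # Residual connection with attention-reweighted orientation features.
        return x + fused * self.attn(fused)


if __name__ == "__main__":
    block = OrientationAwareBlock(channels=64)
    lr_feat = torch.randn(1, 64, 48, 48)  # a low-resolution feature map
    print(block(lr_feat).shape)           # torch.Size([1, 64, 48, 48])

Because the directional kernels are registered as buffers rather than parameters, they stay fixed during training; only the merging convolution and the channel attention weights are learned in this sketch.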
