Abstract

Ultra-high-definition display technology is widely used in broadcasting, but there is a fundamental tension between its ultra-high-resolution content and limited storage capacity. Super-Resolution (SR) can effectively alleviate this tension. Recently, state-of-the-art image SR approaches leveraging Deep Convolutional Neural Networks (DCNNs) have demonstrated high-quality reconstruction performance. However, most of them suffer from large model parameter counts, which restricts their practical application. Moreover, image SR at large scaling factors (e.g., ×8) becomes a tricky issue when the parameter budget shrinks. To remedy these issues, we propose the Lightweight Multi-scale Aggregation Network (LMAN) for image SR, which works well for both small and large scaling factors with limited parameters. Specifically, we propose a Group-wise Multi-scale Block (GMB) in which a group convolution extracts and fuses multi-scale features before a channel attention layer to obtain discriminative features. Additionally, we present a novel Hierarchical Spatial Attention (HSA) mechanism to jointly and adaptively fuse local and global hierarchical features for high-resolution image reconstruction. Extensive experiments illustrate that our LMAN achieves superior performance against state-of-the-art methods with similar parameter counts, in particular at large scaling factors such as ×4 and ×8.
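
The abstract does not specify the internals of the GMB, so the following PyTorch sketch is only an illustration of the general idea it describes: grouped convolutions at several receptive fields whose outputs are fused and then re-weighted by channel attention. All names, the choice of dilated branches, the squeeze-and-excitation-style attention, and the residual connection are assumptions, not the authors' actual design.

import torch
import torch.nn as nn

class GroupwiseMultiscaleBlock(nn.Module):
    """Hypothetical sketch of a group-wise multi-scale block (GMB).

    Assumed structure: grouped 3x3 convolutions with different dilations
    extract multi-scale features cheaply; a 1x1 convolution fuses them;
    a channel attention gate re-weights the fused features.
    """

    def __init__(self, channels: int = 64, groups: int = 4, reduction: int = 16):
        super().__init__()
        # One grouped convolution per scale; dilation varies the
        # receptive field without increasing the parameter count.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, groups=groups)
            for d in (1, 2, 3)
        )
        self.fuse = nn.Conv2d(len(self.branches) * channels, channels, kernel_size=1)
        # Channel attention: global pooling -> bottleneck -> sigmoid gate.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multiscale = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        fused = self.fuse(multiscale)
        return x + fused * self.attention(fused)  # residual connection

if __name__ == "__main__":
    block = GroupwiseMultiscaleBlock()
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])

Grouped convolutions divide the parameter cost of each branch by the number of groups, which is consistent with the paper's stated goal of keeping the model lightweight while still aggregating features at multiple scales.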
