Abstract

In image super-resolution, deep neural networks equipped with various attention mechanisms, such as channel attention and layer attention, have achieved noticeable performance in recent years. Although many researchers have obtained good super-resolution results with a single style of attention, the divergent yet complementary cues captured by multiple attention mechanisms are ignored. In addition, most of these methods fail to exploit the diverse information contained in multi-scale features. To manipulate this rich information efficiently, this paper combines a multi-scale structure with multiple attention schemes at both the architecture and module levels for super-resolution. At the architecture level, a fused pyramid attention network is developed to recurrently extract deep features with multi-scale context from receptive fields of different sizes, aided by skip connections. At the module level, a fused pyramid attention module is designed to fuse the two attention mechanisms and further refine the deep features with fine-grained information. Compared with a common fusion strategy, the adopted feature fusion structure preserves structural information better while establishing long-range dependencies. Extensive experimental results demonstrate that the proposed network achieves favorable performance both quantitatively and visually.
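A minimal PyTorch sketch of what such a module-level fusion could look like is given below, assuming an SE-style channel-attention branch and a pyramid of dilated convolutions as the multi-scale branch; the class name FusedPyramidAttention, the reduction factor, and the dilation rates are illustrative assumptions, not the paper's actual design.

```python
# Hedged sketch of a fused multi-scale (pyramid) + channel-attention block.
# Assumptions: SE-style channel branch, dilated-convolution pyramid branch;
# names and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class FusedPyramidAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, dilations=(1, 2, 4)):
        super().__init__()
        # Pyramid branch: parallel dilated convolutions provide several receptive-field sizes.
        self.pyramid = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
        # Channel-attention branch: squeeze (global average pool) then excite (two 1x1 convs).
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Aggregate multi-scale context, reweight its channels, and keep a skip connection.
        multi_scale = self.fuse(torch.cat([branch(x) for branch in self.pyramid], dim=1))
        refined = multi_scale * self.channel_att(multi_scale)
        return x + refined

# Usage: refine a feature map of shape (N, C, H, W).
feats = torch.randn(2, 64, 48, 48)
print(FusedPyramidAttention(64)(feats).shape)  # torch.Size([2, 64, 48, 48])
```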
