Abstract

In recent years, deep convolutional neural networks with multi-scale features have been widely used in image super-resolution reconstruction (ISR), and the quality of the generated images has improved significantly over traditional methods. However, existing super-resolution networks leave room for improvement in two respects: the effective fusion of multi-scale features and the application of attention mechanisms across feature domains. To address these issues, we propose a novel multi-scale cross-attention fusion network (MCFN), which optimizes feature extraction and fusion through its structural design and modular innovations. To exploit the attention mechanism more fully, we propose a Pyramid Multi-scale Module (PMM) that extracts multi-scale information through cascading; each PMM is built primarily from multiple multi-scale cross-attention modules (MTMs). To fuse the features produced by the PMMs efficiently in both the channel and spatial dimensions, we propose a cross-attention fusion module (CFM). In addition, an improved integrated attention enhancement module (IAEM) is inserted at the end of the network to strengthen the correlation of high-frequency features across layers. Experimental results show that the proposed algorithm significantly improves the edge information and texture details of reconstructed images and performs comparably to state-of-the-art methods on benchmark datasets.
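The abstract does not specify the internal design of the cross-attention fusion module. The sketch below is a minimal, hypothetical PyTorch interpretation of fusing two feature maps "in both channel and spatial dimensions": a squeeze-and-excitation-style channel attention followed by a CBAM-style spatial attention over the concatenated branches. All class names, the reduction ratio, the 7x7 kernel, and the residual connection are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweight channels via global pooling and a bottleneck MLP (SE-style)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.mlp(x)


class SpatialAttention(nn.Module):
    """Reweight spatial positions from channel-pooled statistics (CBAM-style)."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)   # average over channels
        mx, _ = x.max(dim=1, keepdim=True)  # max over channels
        weights = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * weights


class CrossAttentionFusion(nn.Module):
    """Hypothetical CFM: fuse two feature maps in channel and spatial dimensions."""

    def __init__(self, channels: int):
        super().__init__()
        self.reduce = nn.Conv2d(2 * channels, channels, 1)  # merge the two branches
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, feat_a, feat_b):
        fused = self.reduce(torch.cat([feat_a, feat_b], dim=1))
        fused = self.channel_att(fused)
        fused = self.spatial_att(fused)
        return fused + feat_a  # residual connection for stable gradient flow


if __name__ == "__main__":
    # Usage: fuse two 64-channel multi-scale feature maps of the same size.
    cfm = CrossAttentionFusion(channels=64)
    a, b = torch.randn(1, 64, 48, 48), torch.randn(1, 64, 48, 48)
    print(cfm(a, b).shape)  # torch.Size([1, 64, 48, 48])
```

Under these assumptions, the channel stage decides which feature maps matter and the spatial stage decides where they matter, which is one common way to realize attention over both dimensions.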
