Abstract
To address texture distortion and blurred details in existing image super-resolution reconstruction methods, a super-resolution reconstruction network based on a multi-channel attention mechanism is proposed. In the texture extraction module, an extremely lightweight multi-channel attention module is designed; combined with one-dimensional convolution, it realizes cross-channel information interaction and focuses on important feature information. The texture restoration module introduces dense residual blocks to restore some high-frequency texture details, improving model performance and generating high-quality reconstructed images. The proposed network not only effectively improves the visual quality of the image; on the benchmark dataset CUFED5, its peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) also exceed those of the classic convolutional-neural-network-based super-resolution method SRCNN by 1.76 dB and 0.062, respectively. Experimental results show that the designed network can improve the accuracy of texture transfer and effectively improve the quality of the generated images.
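The lightweight multi-channel attention described above replaces the usual fully connected layers with a one-dimensional convolution over a pooled channel descriptor to realize local cross-channel interaction. The paper does not provide code, so the following PyTorch sketch is only illustrative; the class name, kernel size, and pooling choice are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiChannelAttention(nn.Module):
    """Illustrative lightweight channel attention using a 1-D convolution (assumed layout)."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # 1-D convolution across the channel dimension: local cross-channel
        # interaction without dimensionality reduction.
        self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        y = self.avg_pool(x).view(b, 1, c)          # (B, 1, C) channel descriptor
        y = self.sigmoid(self.conv(y)).view(b, c, 1, 1)
        return x * y                                # re-weight each feature channel

feat = torch.randn(2, 64, 40, 40)
print(MultiChannelAttention()(feat).shape)          # torch.Size([2, 64, 40, 40])
```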
Highlights
Super-resolution reconstruction of a single image is a technique for recovering high-resolution images from low-resolution images
High-resolution images are widely used in remote sensing and mapping, medical imaging, video surveillance, and image generation
Shi et al. [7] proposed a sub-pixel convolution method that does not require preprocessing of the low-resolution image: the low-resolution image is used directly as the network input for feature extraction, and the feature maps are rearranged in the last layer to realize the up-sampling operation, which reduces the destruction of the low-resolution image's context information and retains as much feature information as possible, as sketched below
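Sub-pixel convolution enlarges a feature map by first producing scale² times as many channels with an ordinary convolution and then rearranging ("shuffling") those channels into spatial positions. The sketch below uses PyTorch's nn.PixelShuffle to illustrate this step; the channel count and scale factor are illustrative assumptions, not values from [7] or from the proposed network.

```python
import torch
import torch.nn as nn

class SubPixelUpsampler(nn.Module):
    """Sub-pixel upsampling sketch: conv to C * r^2 channels, then pixel shuffle."""

    def __init__(self, channels: int = 64, scale: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)   # (B, C*r^2, H, W) -> (B, C, H*r, W*r)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.conv(x))

lr_feat = torch.randn(1, 64, 32, 32)            # low-resolution feature map
print(SubPixelUpsampler()(lr_feat).shape)       # torch.Size([1, 64, 128, 128])
```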
Summary
Super-resolution reconstruction of a single image is a technique for recovering a high-resolution image from a low-resolution image. The multi-channel attention mechanism is combined with the texture search module: local cross-channel information interaction is realized through one-dimensional convolution, and different weights are assigned to each feature channel of the input image, so that the more important feature information is extracted and feature reuse is facilitated. The texture recovery module introduces dense residual blocks to improve the structure of the model; the batch normalization layers in the dense residual blocks are removed, and residual scaling is used to restore some high-frequency details and produce high-quality reconstructed images. The features X_L obtained from the last residual attention information extraction module are passed through a multi-scale upsampling module, as shown in equation (11): X_tail = F_tail(X_L), where F_tail denotes the multi-scale upsampling module that upsamples X_L to the target size X_tail. This module integrates the multi-frequency information generated by the nonlinear mapping module and uses sub-pixel convolution to up-sample the image. The perceptual loss used in this paper, L_per, is expressed by equation (8).
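The dense residual block described in the summary (batch normalization removed, residual scaling applied) can be sketched roughly as follows in PyTorch; the number of dense layers, growth rate, activation, and the 0.2 scaling factor are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Dense residual block without batch normalization, with residual scaling (sketch)."""

    def __init__(self, channels: int = 64, growth: int = 32, res_scale: float = 0.2):
        super().__init__()
        self.res_scale = res_scale
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(4):
            # Each layer sees all previous feature maps (dense connectivity); no BN layers.
            self.convs.append(nn.Conv2d(in_ch, growth, kernel_size=3, padding=1))
            in_ch += growth
        # Fuse the concatenated features back to the block's input width.
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        out = self.fuse(torch.cat(feats, dim=1))
        # Residual scaling helps stabilize training when batch normalization is removed.
        return x + self.res_scale * out

print(DenseResidualBlock()(torch.randn(1, 64, 40, 40)).shape)  # torch.Size([1, 64, 40, 40])
```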
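The summary refers to a perceptual loss L_per defined in equation (8) but does not reproduce it. A common formulation, shown below only as a hedged sketch, measures the mean squared error between deep features of the reconstructed and ground-truth images extracted by a frozen VGG-19; the particular feature slice (features[:35]) and the MSE criterion are assumptions, not necessarily the paper's exact definition.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualLoss(nn.Module):
    """Perceptual-loss sketch: MSE between VGG-19 features of SR and HR images."""

    def __init__(self):
        super().__init__()
        # Frozen, truncated VGG-19 feature extractor (layer choice is an assumption).
        self.features = vgg19(weights="IMAGENET1K_V1").features[:35].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.criterion = nn.MSELoss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        return self.criterion(self.features(sr), self.features(hr))

# Usage: loss = PerceptualLoss()(reconstructed_batch, ground_truth_batch)
```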