Abstract

Existing deep-learning methods for remote sensing image super-resolution reconstruction suffer from problems such as insufficient feature extraction, blurred image edges, and unstable model training. To address these problems, a super-resolution reconstruction method combining residual channel attention (CA) is proposed. Within a generative adversarial network framework, a residual structure is designed to enhance the deep feature extraction capability for remote sensing images. A CA module is added to extract deep feature information, and shallow and deep features are fused through skip connections. The perceptual loss function is combined with a loss function based on the Wasserstein distance to improve the stability of model training. The experimental results show that this method outperforms the comparison algorithms on the objective evaluation criteria of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) for the reconstructed remote sensing images. After optimizing the model training process, the reconstructed remote sensing images are visually clearer and have sharper edges.
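The full text is not included here, so the following is only a minimal sketch of the components the abstract names: a residual block with squeeze-and-excitation-style channel attention and a skip connection, and a generator loss that adds a perceptual term to a Wasserstein-style adversarial term. The class and function names (`ChannelAttention`, `ResidualCABlock`, `generator_loss`) and the weighting parameter `lambda_percep` are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (CA)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling -> B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # rescale each feature channel


class ResidualCABlock(nn.Module):
    """Residual block with channel attention; the skip connection
    fuses shallow features with the deep branch."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # shallow + deep feature fusion


def generator_loss(critic_fake_scores: torch.Tensor,
                   perceptual_dist: torch.Tensor,
                   lambda_percep: float = 1.0) -> torch.Tensor:
    """WGAN-style adversarial term plus a weighted perceptual term.
    `critic_fake_scores`: critic outputs on reconstructed images.
    `perceptual_dist`: e.g. an L2 distance between VGG feature maps
    of the reconstructed and reference images."""
    adversarial = -critic_fake_scores.mean()  # push critic scores on fakes up
    return adversarial + lambda_percep * perceptual_dist
```

In this sketch the Wasserstein formulation replaces the saturating log-loss of a standard GAN, which is one common way to stabilize adversarial training; the exact network depth, reduction ratio, and loss weighting used in the paper are not specified in the abstract.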
