Abstract

Existing deep-learning methods for remote sensing image super-resolution reconstruction suffer from several problems, such as insufficient feature extraction ability, blurred image edges, and difficult model training. To address these problems, a super-resolution reconstruction method combining residual channel attention (CA) is proposed. Within a generative adversarial network framework, a residual structure is designed to enhance deep feature extraction from remote sensing images. The CA module is added to extract deep feature information, and shallow and deep features are fused through skip connections. The perceptual loss function is combined with an adversarial loss based on the Wasserstein distance to improve the stability of model training. Experimental results show that this method outperforms the comparison algorithms on the objective metrics of peak signal-to-noise ratio and structural similarity for the reconstructed remote sensing images. After optimizing the model training process, the reconstructed remote sensing images are visually clearer and have sharper edges.
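The abstract does not give the exact layer configuration or loss weights; as an illustration only, the following PyTorch sketch shows one common form of a residual block with channel attention and a generator loss that combines a perceptual (feature-space) term with the Wasserstein adversarial term. The module names, the channel `reduction` factor, and the weight `lam` are assumptions for this sketch, not the paper's specification.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling
    followed by a two-layer bottleneck that rescales each feature channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # per-channel rescaling of the features

class ResidualCABlock(nn.Module):
    """Residual block with channel attention: conv-ReLU-conv, CA rescaling,
    then a skip connection that adds the block input back (shallow/deep fusion)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # identity skip connection

def generator_loss(feature_extractor, sr, hr, critic_score_sr, lam: float = 1e-3):
    """Hypothetical combined objective: perceptual loss in a pretrained feature
    space plus the WGAN generator term, weighted by an assumed coefficient lam."""
    perceptual = F.l1_loss(feature_extractor(sr), feature_extractor(hr))
    adversarial = -critic_score_sr.mean()  # Wasserstein generator objective
    return perceptual + lam * adversarial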
