Abstract
Clouds frequently contaminate optical remote sensing images during acquisition, degrading image quality and hindering the interpretation and subsequent use of ground information. Under thick cloud cover, the surface information beneath is lost entirely, so this end-to-end restoration problem cannot be dismissed as simple image inpainting or image translation. This paper therefore proposes a multi-head self-attention module built on an encoder–decoder generative adversarial network. To address the redundant information in deep networks, it further introduces Ghost convolution, which limits the growth in computation time and parameter count caused by redundant feature maps. By exploiting spatial information, the method better predicts cloud-free content, and it reduces network computation and parameters while maintaining restoration quality. In addition, a Feature Fusion Module is proposed to integrate high-level features with low-level features, so that the network extracts sufficient feature information and better recovers fine details for cloud removal. The method achieves excellent results on the RICE1 and RICE2 datasets.
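The Ghost convolution mentioned above replaces part of an ordinary convolution's output with cheap linear operations applied to a smaller set of "intrinsic" feature maps, which is how it trims computation and parameters. A minimal sketch in PyTorch follows, in the style of the GhostNet module; the class name, ratio, and kernel sizes are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a Ghost convolution block: a primary conv produces a few
# intrinsic feature maps, then a cheap depthwise conv generates "ghost"
# feature maps from them; the two are concatenated along the channel axis.
# Hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, cheap_kernel=3):
        super().__init__()
        primary_ch = out_ch // ratio      # intrinsic maps from a normal conv
        ghost_ch = out_ch - primary_ch    # remaining maps from cheap ops
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(primary_ch),
            nn.ReLU(inplace=True),
        )
        # Depthwise conv (groups=primary_ch): one cheap filter per intrinsic map
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, ghost_ch, cheap_kernel,
                      padding=cheap_kernel // 2, groups=primary_ch, bias=False),
            nn.BatchNorm2d(ghost_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        return torch.cat([intrinsic, ghost], dim=1)

x = torch.randn(1, 64, 32, 32)
y = GhostConv(64, 128)(x)
print(tuple(y.shape))  # (1, 128, 32, 32)
```

With ratio=2, half of the output channels come from the depthwise "cheap" branch, so the block needs roughly half the multiply-accumulates of a full convolution with the same output width.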
Journal: International Journal of Pattern Recognition and Artificial Intelligence