Abstract

Optical remote sensing images are widely used in feature recognition, scene semantic segmentation, and other fields. However, the quality of remote sensing images is degraded by various kinds of noise, which seriously limits their practical use. Because remote sensing images have more complex texture features than ordinary images, previous denoising algorithms often fail to achieve the desired results on them. We therefore propose a novel remote sensing image denoising network (RSIDNet) based on a deep learning approach, which mainly consists of a multi-scale feature extraction module (MFE), multiple local skip-connected enhanced channel attention blocks (ECA), a global feature fusion block (GFF), and a noise feature map reconstruction block (NR). The combination of these modules greatly improves the model's use of the extracted features and increases its denoising capability. Extensive experiments on synthetic Gaussian noise datasets and real noise datasets show that RSIDNet achieves satisfactory results. RSIDNet reduces the loss of detail information that traditional denoising methods introduce into denoised images, retaining more of the high-frequency components, which benefits subsequent image processing.
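
The synthetic-noise experiments mentioned above rely on clean/noisy training pairs. As a minimal sketch (not taken from the paper), assuming zero-mean additive Gaussian noise on a 0-255 intensity scale, such a pair can be generated as follows; the function name and the sigma value are illustrative only:

    import numpy as np

    def add_gaussian_noise(clean: np.ndarray, sigma: float = 25.0) -> np.ndarray:
        """Return a noisy copy of `clean`, a float array on a 0-255 intensity scale."""
        noise = np.random.normal(0.0, sigma, size=clean.shape)  # zero-mean Gaussian noise
        return np.clip(clean + noise, 0.0, 255.0)

    # Usage with a random stand-in for a clean 256x256 grayscale image:
    clean = np.random.uniform(0.0, 255.0, size=(256, 256))
    noisy = add_gaussian_noise(clean, sigma=25.0)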

Highlights

  • Remote sensing is a technology that collects information about the Earth in a noncontact way [1]

  • Optical remote sensing images have a wide range of applications in environmental monitoring [2], military target recognition [3], moving target tracking [4], and resource exploration [5]

  • We propose in this paper a novel remote sensing image denoising network (RSIDNet). It is mainly composed of a multi-scale feature extraction module (MFE), multiple local skip-connected enhanced channel attention blocks (ECA), a global feature fusion block (GFF), and a noise feature map reconstruction block (NR)


Summary

Introduction

Remote sensing is a technology that collects information about the Earth in a noncontact way [1]. Effective removal of random noise in remote sensing images has become a key means of improving image quality. Many methods can generate remote sensing image denoising datasets. For remote sensing image denoising, given the same training dataset, the network architecture of the model should effectively handle images containing rich and complex information, so that clean remote sensing images can be recovered without significant loss of image texture. To solve these problems, we propose in this paper a novel remote sensing image denoising network (RSIDNet). It is mainly composed of a multi-scale feature extraction module (MFE), multiple local skip-connected enhanced channel attention blocks (ECA), a global feature fusion block (GFF), and a noise feature map reconstruction block (NR); a minimal code sketch of this composition follows the list below. The main contributions of this work are summarized as follows:

  • We propose RSIDNet, a remote sensing image denoising network built from a multi-scale feature extraction module (MFE), multiple local skip-connected enhanced channel attention blocks (ECA), a global feature fusion block (GFF), and a noise feature map reconstruction block (NR); the combination of these modules improves the model's use of the extracted features and its denoising capability

  • Extensive experiments on synthetic Gaussian noise datasets and real noise datasets show that RSIDNet achieves satisfactory results, retaining more high-frequency detail than traditional denoising methods
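
To make the composition above concrete, the following is a minimal PyTorch sketch assuming plausible forms for each module: parallel multi-scale convolutions for MFE, squeeze-and-excitation-style channel attention with a local skip connection for ECA, a 1x1 fusion convolution over all block outputs for GFF, and a final convolution that reconstructs a noise map subtracted from the input for NR. These internals are assumptions for illustration; the paper's actual layer configuration may differ.

    import torch
    import torch.nn as nn

    class MFE(nn.Module):
        """Multi-scale feature extraction: parallel convs with different kernel sizes (assumed form)."""
        def __init__(self, in_ch, ch):
            super().__init__()
            self.b3 = nn.Conv2d(in_ch, ch, 3, padding=1)
            self.b5 = nn.Conv2d(in_ch, ch, 5, padding=2)
            self.b7 = nn.Conv2d(in_ch, ch, 7, padding=3)
            self.fuse = nn.Conv2d(3 * ch, ch, 1)

        def forward(self, x):
            return self.fuse(torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1))

    class ECA(nn.Module):
        """Enhanced channel attention block with a local skip connection (assumed form)."""
        def __init__(self, ch, reduction=4):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1),
            )
            # Squeeze-and-excitation-style channel attention over the block's features.
            self.att = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
                nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
            )

        def forward(self, x):
            y = self.body(x)
            return x + y * self.att(y)  # local skip connection around the attended features

    class RSIDNet(nn.Module):
        def __init__(self, in_ch=1, ch=64, num_blocks=5):
            super().__init__()
            self.mfe = MFE(in_ch, ch)
            self.blocks = nn.ModuleList([ECA(ch) for _ in range(num_blocks)])
            self.gff = nn.Conv2d(ch * num_blocks, ch, 1)  # global feature fusion over all block outputs
            self.nr = nn.Conv2d(ch, in_ch, 3, padding=1)  # noise feature map reconstruction

        def forward(self, x):
            f = self.mfe(x)
            feats = []
            for blk in self.blocks:
                f = blk(f)
                feats.append(f)
            noise = self.nr(self.gff(torch.cat(feats, dim=1)))
            return x - noise  # subtract the predicted noise map (assumed residual design)

Usage is then, for a noisy tensor of shape (N, 1, H, W): model = RSIDNet(in_ch=1, ch=64, num_blocks=5); denoised = model(noisy_tensor). The hyperparameters B (number of enhanced channel attention blocks) and c (number of feature channels) studied in the paper correspond to num_blocks and ch in this sketch.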

Methods of Remote Sensing Image Denoising
Traditional Methods of Remote Sensing Image Denoising
Deep Learning Methods of Remote Sensing Image Denoising
Attentional Mechanism
Network Architecture
Role of Multi-Scale Feature Extraction Module
Loss Function
Implementation Settings
Network Hyperparameters: Number of Enhanced Channel Attention Blocks B and Number of Feature Channels c
Comparison with Advanced Algorithms
Gray and Color Synthetic Noisy Remote Sensing Images
Real Noisy Remote Sensing Images
Results
Ablation Experiment
Comparison
Summary and Conclusions

