Abstract
Stripe noise often affects remote sensing images, degrading imaging quality and hindering subsequent image processing. Owing to their powerful feature extraction capability, deep learning methods have made remarkable advances in removing stripes from remote sensing images. However, these methods remain insufficient in analysing the structural characteristics along the stripe direction and multi-scale contextual information. To address these issues, we propose a two-stage image decomposition network for remote sensing image destriping, named TSIDNet, which exploits the structural characteristics of stripes to remove stripe noise effectively while preserving finer details. Firstly, stripes are directional, and their structural information is mainly concentrated in the image obtained by differential decomposition along the stripe direction. Therefore, a differential decomposition stripe extraction subnetwork (DDSESN) is constructed to generate latent images. Within this subnetwork, we further design a multi-scale cross-fusion residual block (MCRB) and a cross-scale fusion attention block (CSFAB) to gradually expand the network's receptive field, which facilitates more complete extraction and removal of stripes. Secondly, the latent clear image is combined with details extracted along the direction opposite to the stripes, and the resulting features are fed into a DWT decomposition detail enhancement subnetwork (DDDESN), which utilizes the discrete wavelet transform (DWT) and a dense residual network for detail enhancement. Experiments demonstrate that the proposed TSIDNet removes image stripes while preserving details, surpassing the comparative methods in both qualitative and quantitative evaluations. The code will be provided after acceptance.
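As a minimal illustration of the directionality observation underlying the abstract (not the paper's DDSESN decomposition, whose exact form is defined in the full text), the NumPy sketch below assumes vertical stripes on a flat toy scene and shows how a directional finite difference separates the stripe pattern from the scene: the stripe component vanishes when differenced along its own direction and stands out when differenced across it. The function and array names are illustrative only.

```python
import numpy as np

def directional_difference(image: np.ndarray, axis: int) -> np.ndarray:
    """Finite difference of `image` along the given axis, keeping its shape."""
    first = np.take(image, [0], axis=axis)          # first row/column, shape-preserving
    return np.diff(image, axis=axis, prepend=first)

# Toy scene: a flat background corrupted by vertical stripes (constant along axis 0).
scene = np.full((4, 6), 10.0)
stripe = np.tile([0.0, 2.0, 0.0, -1.0, 0.0, 1.0], (4, 1))
noisy = scene + stripe

along = directional_difference(noisy, axis=0)    # differenced along the stripe direction
across = directional_difference(noisy, axis=1)   # differenced across the stripe direction

print(np.abs(along).max())   # 0.0 -> stripe component vanishes along its own direction
print(across[0])             # [ 0.  2. -2. -1.  1.  1.] -> stripe edges stand out
```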