The speckle noise inherent in synthetic aperture radar (SAR) imaging has long posed a challenge for SAR data processing, significantly hindering image interpretation and recognition. Recently, deep learning-based SAR despeckling algorithms have shown promising results. However, most existing algorithms rely on convolutional neural networks (CNNs), which struggle to capture global image information and tend to cause texture loss. Moreover, because optical images and SAR images have different characteristics, models trained on simulated SAR data can be unstable when applied to real-world SAR despeckling. To address these limitations, we propose an approach that integrates Swin Transformer blocks into the noise prediction network of the denoising diffusion probabilistic model (DDPM). By harnessing DDPM's robust generative capability and the Swin Transformer's proficiency in extracting global features, our approach suppresses speckle while preserving image detail and enhancing authenticity. Additionally, we employ a post-processing strategy, pixel-shuffle down-sampling (PD) refinement, to mitigate the adverse effects of training data and a training process that assume spatially uncorrelated noise, thereby improving adaptability to real-world SAR image despeckling. We conducted experiments on both simulated and real SAR image datasets, evaluating our algorithm from subjective and objective perspectives. The visual results demonstrate significant improvements in noise suppression and image detail restoration. The objective results demonstrate that our method achieves state-of-the-art performance, outperforming the second-best method by an average of 0.93 dB in peak signal-to-noise ratio (PSNR) and 0.03 in Structural Similarity Index (SSIM), affirming the effectiveness of our approach.
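As a rough illustration of the generative backbone described above, the following sketch shows a standard conditional DDPM reverse-sampling loop in PyTorch. It is not the paper's implementation: the `eps_model` callable stands in for a noise prediction network built from Swin Transformer blocks, and its assumed signature `eps_model(x_t, t, cond)` as well as the variance schedule handling are illustrative assumptions.

```python
import torch

@torch.no_grad()
def ddpm_despeckle(noisy, eps_model, betas):
    """Illustrative sketch of conditional DDPM sampling for despeckling.

    noisy     : (N, 1, H, W) speckled SAR image used as the condition
    eps_model : assumed noise prediction network (e.g. a U-Net whose blocks
                are Swin Transformer blocks); assumed signature
                eps_model(x_t, t, cond) -> predicted noise, same shape as x_t
    betas     : (T,) variance schedule
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(noisy)                      # start from pure Gaussian noise

    for t in reversed(range(len(betas))):
        t_batch = torch.full((noisy.shape[0],), t, device=noisy.device, dtype=torch.long)
        eps = eps_model(x, t_batch, noisy)           # predict noise, conditioned on the SAR input

        # Standard DDPM posterior mean (Ho et al., 2020), with sigma_t^2 = beta_t.
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise      # add posterior noise except at the last step
    return x
```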
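Similarly, the pixel-shuffle down-sampling (PD) refinement can be pictured as splitting a real SAR image into interleaved sub-images before denoising, which weakens the spatial correlation of real speckle so that it better matches the uncorrelated noise assumed during training. The sketch below is again an assumption rather than the authors' code; it uses PyTorch's `pixel_unshuffle`/`pixel_shuffle` for the rearrangement and treats the trained denoiser as a black box.

```python
import torch
import torch.nn.functional as F

def pd_refine(noisy, denoiser, stride=2):
    """Illustrative sketch of pixel-shuffle down-sampling (PD) refinement.

    noisy    : (N, C, H, W) real-world SAR tensor, H and W divisible by stride
    denoiser : callable trained on spatially uncorrelated noise
    stride   : PD stride controlling how far apart sampled pixels are
    """
    n, c, _, _ = noisy.shape
    s2 = stride * stride

    # Pixel-unshuffle: split the image into stride**2 sub-images whose
    # neighbouring pixels were `stride` pixels apart in the original image,
    # weakening the spatial correlation of real speckle.
    subs = F.pixel_unshuffle(noisy, stride)                    # (N, C*s2, H/s, W/s)
    h, w = subs.shape[-2:]

    # Channel layout after pixel_unshuffle is (C, s2); move the sub-image
    # index into the batch dimension so each sub-image is denoised separately.
    subs = subs.reshape(n, c, s2, h, w).permute(0, 2, 1, 3, 4).reshape(n * s2, c, h, w)
    cleaned = denoiser(subs)

    # Undo the rearrangement and pixel-shuffle back to full resolution.
    cleaned = cleaned.reshape(n, s2, c, h, w).permute(0, 2, 1, 3, 4).reshape(n, c * s2, h, w)
    return F.pixel_shuffle(cleaned, stride)
```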