Abstract

It has long been recognized that synthetic aperture radar (SAR) images suffer from speckle noise in many applications. Video SAR images a scene at a high frame rate and therefore contains redundant information across frames, and this temporal redundancy has proven useful for suppressing speckle. However, inter-frame motion and local differences between frames make the redundancy difficult to exploit for despeckling. This article presents a video SAR image despeckling framework based on a new unsupervised training strategy referred to as DualNoise2Noise. The framework consists of a registration network and a denoising network. The registration network first compensates, in real time, for the motion between two adjacent video SAR frames. After registration, the two frames, each corrupted by independent speckle, can be regarded as observations of the same region that differ only locally. The denoising network is then trained with the DualNoise2Noise strategy, which exploits the temporal redundancy to suppress speckle while removing the negative impact of the local differences. The proposed approach has been applied to real video SAR data, and the experimental results are convincing.
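
For readers unfamiliar with the Noise2Noise idea underlying this kind of training strategy, the sketch below shows a symmetric Noise2Noise-style training step on a registered pair of speckled frames, written in PyTorch. It is illustrative only: the network architecture, the L1 loss, and all names (`TinyDenoiser`, `noise2noise_step`, etc.) are assumptions, and the paper's actual DualNoise2Noise loss, which additionally handles the local differences between frames, is not reproduced here.

```python
# Minimal sketch (not the authors' exact DualNoise2Noise): a symmetric
# Noise2Noise-style training step, assuming the registration network has
# already warped frame B onto frame A's grid. Names are hypothetical.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Placeholder denoising network; the paper's architecture is not specified here."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def noise2noise_step(denoiser, optimizer, frame_a, frame_b_registered):
    """One unsupervised step: each speckled frame supervises the other.

    frame_a, frame_b_registered: (N, 1, H, W) log-intensity SAR patches of the
    same scene after registration, each carrying independent speckle.
    """
    optimizer.zero_grad()
    pred_a = denoiser(frame_a)
    pred_b = denoiser(frame_b_registered)
    # Symmetric Noise2Noise loss: each prediction is compared against the
    # *other* noisy observation, so no clean reference image is ever needed.
    loss = (nn.functional.l1_loss(pred_a, frame_b_registered)
            + nn.functional.l1_loss(pred_b, frame_a))
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    a = torch.rand(2, 1, 64, 64)  # stand-ins for two registered video SAR frames
    b = torch.rand(2, 1, 64, 64)
    print(noise2noise_step(model, opt, a, b))
```

Because the supervising target is itself a noisy observation of the same scene, this style of training converges toward the underlying clean intensity without clean references; the abstract indicates that DualNoise2Noise further accounts for the residual local differences that survive registration.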
