Abstract

Existing approaches to SAR image registration focus on correcting the global transformation between images. However, local deformations often remain between images. Because the viewpoint of video SAR changes over time, its images are particularly affected by local deformations, which can produce false alarms in moving target detection. This article presents an unsupervised image registration approach for video SAR moving target detection that achieves good registration performance with acceptable processing efficiency. The proposed unsupervised learning-based framework is a cascade of two convolutional neural networks. The first network directly predicts the parameters of the rigid transformation between the reference and unregistered images and recovers the global transformation between them. The second network takes the reference image and the registered image from the first network as input and predicts a displacement field. We then constrain the predicted displacement field to prevent moving-target shadows from being aligned, and use the constrained field to compensate for the local deformations between the two images. Processing results on real video SAR images show good performance of the proposed approach with convincing generalization ability.
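The two-stage compensation described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: `rigid_warp` stands in for the output of the first network (which predicts rigid-transform parameters), and `limited_warp` stands in for the second stage, where the predicted displacement field is clipped to a maximum magnitude (a hypothetical `max_disp` threshold) so large motions such as moving-target shadows are not aligned. Nearest-neighbour sampling is used for brevity.

```python
import numpy as np

def rigid_warp(img, theta, tx, ty):
    """Warp `img` by a rigid transform (rotation `theta` in radians,
    translation `tx`, `ty`) using inverse mapping with nearest-neighbour
    sampling. Pixels that map from outside the image are set to 0."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    c, s = np.cos(theta), np.sin(theta)
    # Inverse rigid transform: undo the translation, then rotate by -theta.
    src_x = c * (xs - tx) + s * (ys - ty)
    src_y = -s * (xs - tx) + c * (ys - ty)
    src_x = np.round(src_x).astype(int)
    src_y = np.round(src_y).astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out = np.zeros_like(img)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out

def limited_warp(img, disp, max_disp):
    """Apply a dense displacement field `disp` of shape (2, H, W)
    holding (dy, dx), with its per-pixel magnitude clipped to
    `max_disp` pixels so large displacements are suppressed."""
    h, w = img.shape
    mag = np.sqrt(disp[0] ** 2 + disp[1] ** 2)
    scale = np.minimum(1.0, max_disp / np.maximum(mag, 1e-12))
    dy, dx = disp[0] * scale, disp[1] * scale
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
    return img[src_y, src_x]
```

With `theta = 0`, `tx = 1`, `ty = 0`, `rigid_warp` shifts the image one pixel to the right; a displacement field of constant magnitude 10 clipped with `max_disp = 1.0` is reduced to a one-pixel shift, which is the intended effect of the limitation on fast-moving shadows.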

Highlights

  • Video synthetic aperture radar (SAR) has received a lot of research attention [1]–[3] recently, which provides a persistent view of a scene of interest by forming high frame rate sequential images [4]

  • Some methods have been developed for moving target detection in video SAR [8], [9], which use the information contained in successive frames

  • Image registration is always used to compensate for background change between the frames [8], [9], which plays an important role in the video SAR moving target detection


Summary

INTRODUCTION

Video synthetic aperture radar (SAR) has recently received a lot of research attention [1]–[3]; it provides a persistent view of a scene of interest by forming high-frame-rate sequential images [4]. Most conventional registration methods recover only the global transformation between SAR images by estimating the parameters of a transformation model, such as a rigid, similarity, or affine transformation. Some unsupervised CNN-based methods have been developed for image registration [24]–[26]. They estimate the deformation field between images by optimizing an image similarity measure, often combined with a smoothing constraint. We can likewise maximize an image similarity measure to estimate the deformations between video SAR images via a displacement field. Unlike conventional image registration methods, we use a CNN to estimate the transformation model parameters directly. Based on the displacement field, we obtain the finely registered image If by warping the image Ip to the reference image Ir.
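The unsupervised objective mentioned above, an image similarity term combined with a smoothing constraint on the displacement field, can be written compactly. The sketch below is an assumption for illustration: it uses mean-squared error as the similarity measure and a first-order finite-difference smoothness penalty with a hypothetical weight `lam`; the paper may use a different similarity measure or regularizer.

```python
import numpy as np

def registration_loss(ref, warped, disp, lam=0.1):
    """Unsupervised registration objective: mean-squared dissimilarity
    between the reference and the warped image, plus a first-order
    smoothness penalty on the displacement field `disp` (shape (2, H, W)),
    weighted by `lam`. Minimizing this trains the network without labels."""
    sim = np.mean((ref - warped) ** 2)
    # Squared finite differences of each displacement component:
    # a spatially constant field incurs zero smoothness penalty.
    dy = np.diff(disp, axis=1) ** 2
    dx = np.diff(disp, axis=2) ** 2
    smooth = dy.mean() + dx.mean()
    return sim + lam * smooth
```

When the warped image matches the reference exactly and the field is spatially constant, the loss is zero; spatially varying fields are penalized in proportion to their local gradients, which discourages physically implausible deformations.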

Global Transformation Correction
Local Deformation Compensation
EXPERIMENTAL RESULTS
Datasets and Training Strategy
Evaluation Metrics
Registration Results
Ablation Study
Generalization Ability