Abstract

Deep learning has achieved remarkable results in computer vision, especially in image and video processing. However, in synthetic aperture radar (SAR) image recognition, the application of deep neural networks (DNNs) is usually restricted by data insufficiency. To augment datasets, generative adversarial networks (GANs) are commonly used to generate numerous photo-realistic SAR images. Although many pixel-level metrics measure a GAN’s performance in terms of the quality of the generated SAR images, few measurements evaluate whether the generated SAR images contain the most representative features of the target. In that case, the classifier may categorize a SAR image into the corresponding class based on a “wrong” criterion, i.e., “Clever Hans” behavior. In this paper, local interpretable model-agnostic explanation (LIME) is innovatively utilized to evaluate whether a generated SAR image possesses the most representative features of a specific kind of target. First, LIME is used to visualize the positive contributions of the input SAR image to the correct prediction of the classifier. Then, representative SAR images can be selected readily by evaluating how well the positive-contribution region matches the target. Experimental results demonstrate that the proposed method can greatly alleviate the “Clever Hans” phenomenon caused by spurious relationships between generated SAR images and their corresponding classes.
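The selection step described in the abstract, keeping only generated images whose LIME positive-contribution region lies on the target, can be sketched as a simple mask-overlap check. The function name, the toy masks, and any acceptance threshold below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def positive_overlap_ratio(positive_mask: np.ndarray, target_mask: np.ndarray) -> float:
    """Fraction of LIME positive-contribution pixels that fall inside the target region.

    positive_mask: boolean map of pixels LIME marks as positively contributing.
    target_mask:   boolean map of pixels belonging to the actual target.
    """
    positive = positive_mask.astype(bool)
    target = target_mask.astype(bool)
    if positive.sum() == 0:
        return 0.0  # no positive evidence at all
    return float(np.logical_and(positive, target).sum() / positive.sum())

# Toy 4x4 example: the target occupies the left half of the image.
target = np.zeros((4, 4), dtype=bool)
target[:, :2] = True

# Hypothetical LIME output: three positive pixels, two inside the target.
positive = np.zeros((4, 4), dtype=bool)
positive[0, 0] = positive[1, 1] = positive[2, 3] = True

print(positive_overlap_ratio(positive, target))  # 2/3 ≈ 0.667
```

A generated image would then be kept when this ratio exceeds some chosen threshold, i.e., when the classifier's positive evidence is concentrated on the target rather than on background clutter.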

Highlights

  • Synthetic aperture radar (SAR) can realize full-time observation and obtain high-resolution SAR images without being restricted by weather and light; it is widely applied in both civil and military fields [1]

  • Moving and stationary target acquisition and recognition (MSTAR) is a dataset composed of ten classes of real measured SAR images of stationary ground vehicles

  • How does local interpretable model-agnostic explanation (LIME) perform in comparison to class activation mapping (CAM)-based approaches? LIME aims at detecting both positively and negatively contributing pixels, while CAM methods aim at providing the positive-contribution region by highlighting the related area


Summary

Introduction

Synthetic aperture radar (SAR) can realize full-time observation and obtain high-resolution SAR images without being restricted by weather and light; it is therefore widely applied in both civil and military fields [1]. Deep learning methods autonomously learn the inner relationship between massive labeled data and the corresponding categories, so their performance depends heavily on the amount of labeled training data. Even if enough raw SAR data can be collected, manually labeling them is time-consuming and labor-intensive. In this case, SAR image simulation has drawn increasing attention. Traditional SAR image simulation methods are mainly based on ray-tracing and rasterization theories [4,5,6], but the simulated images they produce differ substantially from real SAR images. This limitation can be alleviated with the emergence of various generative adversarial networks (GANs).
