Abstract

Convolutional neural networks (CNNs) have recently achieved tremendous success in optical image classification. However, for synthetic aperture radar (SAR) target classification, it is difficult to annotate a large number of real SAR images to train CNNs. Sufficient annotated images can be easily obtained through simulation, but the disparity between simulated and real images makes it difficult to apply models trained on simulated images directly to real image classification. In this paper, we propose a model that integrates multi-kernel maximum mean discrepancy (MK-MMD) and domain-adversarial training to alleviate this problem. Annotated simulated SAR images and unlabeled real SAR images are used to train our model. First, we use domain-adversarial training to prompt the model to extract domain-invariant features. Then, the MK-MMD between the hidden representations of simulated images and real images is reduced to narrow the domain discrepancy. Experimental results on a real SAR dataset demonstrate that our method effectively mitigates the domain shift problem and improves classification accuracy.
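As a rough illustration of the two ingredients named above, the sketch below shows a standard gradient reversal layer (the usual mechanism behind domain-adversarial training) and a multi-kernel RBF estimate of MMD² between a batch of simulated-image features and a batch of real-image features, written in PyTorch. The function names and the bandwidth set are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import torch


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in the
    backward pass, so the feature extractor is trained adversarially against
    the domain classifier."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


def mk_mmd(source_feats, target_feats, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """Estimate MMD^2 between two feature batches using a sum of Gaussian
    (RBF) kernels at several bandwidths (the multi-kernel part).
    `bandwidths` here is an assumed, illustrative choice."""
    x = torch.cat([source_feats, target_feats], dim=0)
    d2 = torch.cdist(x, x, p=2).pow(2)          # pairwise squared distances
    k = sum(torch.exp(-d2 / (2.0 * bw)) for bw in bandwidths)
    n = source_feats.size(0)
    k_ss = k[:n, :n].mean()                     # source-source similarity
    k_tt = k[n:, n:].mean()                     # target-target similarity
    k_st = k[:n, n:].mean()                     # cross-domain similarity
    return k_ss + k_tt - 2.0 * k_st
```

In a training loop of this kind, the total objective would typically combine the classification loss on labeled simulated images, the domain-classification loss computed on features passed through `GradReverse`, and the `mk_mmd` term between the two domains' hidden representations.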
