Abstract

Convolutional neural networks (CNNs) have recently achieved tremendous success in optical image classification. In synthetic aperture radar (SAR) target classification, however, it is difficult to annotate enough real SAR images to train CNNs. Sufficient annotated images can easily be obtained through simulation, but the disparity between simulated and real images makes it difficult to apply them directly to the classification of real images. In this paper, we propose a model that integrates multi-kernel maximum mean discrepancy (MK-MMD) and domain-adversarial training to alleviate this problem. The model is trained on annotated simulated SAR images together with unlabeled real SAR images. First, domain-adversarial training prompts the model to extract domain-invariant features. Then, the MK-MMD between the hidden representations of the simulated and real images is minimized to narrow the domain discrepancy. Experimental results on a real SAR dataset demonstrate that our method effectively mitigates the domain shift problem and improves classification accuracy.
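As a rough illustration of how these two objectives can be combined, the sketch below (PyTorch is assumed here; the abstract does not specify a framework) computes a classification loss on labeled simulated images, a domain-adversarial loss through a gradient-reversal layer, and an MK-MMD penalty between the simulated and real feature representations. All network architectures, kernel bandwidths, and loss weights are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mk_mmd(feat_s, feat_t, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """Multi-kernel MMD^2 between source (simulated) and target (real)
    features, using a sum of Gaussian kernels with fixed bandwidths
    (bandwidths are illustrative assumptions)."""
    x = torch.cat([feat_s, feat_t], dim=0)
    d2 = torch.cdist(x, x).pow(2)            # pairwise squared distances
    k = sum(torch.exp(-d2 / (2.0 * b ** 2)) for b in bandwidths)
    n = feat_s.size(0)
    return k[:n, :n].mean() + k[n:, n:].mean() - 2.0 * k[:n, n:].mean()

class GradReverse(torch.autograd.Function):
    """Gradient-reversal layer: identity in the forward pass,
    negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

# Hypothetical networks: a shared feature extractor, a label classifier,
# and a domain discriminator (architectures are placeholders).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
label_classifier = nn.Linear(256, 10)        # e.g. 10 target classes
domain_classifier = nn.Linear(256, 2)        # simulated vs. real

def training_loss(x_sim, y_sim, x_real, lamb=1.0, alpha=0.5):
    """Combined objective: classification + domain-adversarial + MK-MMD."""
    f_sim = feature_extractor(x_sim)
    f_real = feature_extractor(x_real)

    # Supervised loss on annotated simulated images.
    cls_loss = F.cross_entropy(label_classifier(f_sim), y_sim)

    # Domain-adversarial loss through the gradient-reversal layer.
    feats = torch.cat([f_sim, f_real], dim=0)
    dom_labels = torch.cat([torch.zeros(len(f_sim)), torch.ones(len(f_real))]).long()
    dom_logits = domain_classifier(GradReverse.apply(feats, lamb))
    adv_loss = F.cross_entropy(dom_logits, dom_labels)

    # MK-MMD penalty narrowing the simulated/real feature discrepancy.
    mmd_loss = mk_mmd(f_sim, f_real)
    return cls_loss + adv_loss + alpha * mmd_loss

# Example usage with random tensors standing in for SAR image batches.
x_sim = torch.randn(8, 1, 64, 64)
x_real = torch.randn(8, 1, 64, 64)
y_sim = torch.randint(0, 10, (8,))
loss = training_loss(x_sim, y_sim, x_real)
loss.backward()
```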

