Abstract

SAR image target recognition relies heavily on large numbers of annotated samples, which makes it difficult to classify targets from unseen classes. Owing to the lack of effective category auxiliary information, current zero-shot target recognition methods for SAR images are limited to inferring a single unseen class rather than classifying multiple unseen classes. To address this issue, this paper proposes a conditional generative network that uses category features derived from simulated images for zero-shot SAR target recognition. First, deep features are extracted from the simulated images and fused into category features that characterize each class as a whole. Then, a conditional VAE-GAN network is constructed to generate feature instances of the unseen classes; the high-level semantic information shared in the category features helps generalize the mapping learned on the seen classes to the unseen classes. Finally, the generated features of the unseen classes are used to train a classifier that can classify real unseen-class images. With the proposed method, the classification accuracies for three unseen classes reach 99.80 ± 1.22% and 71.57 ± 2.28% on the SAMPLE and MSTAR datasets, respectively. The advantages and validity of the proposed architecture are demonstrated with a small number of seen classes and a small amount of training data. Furthermore, the proposed method can be extended to generalized zero-shot recognition.
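
The abstract walks from category-feature fusion to conditional feature generation and classifier training. The sketch below illustrates only the last two steps of such a pipeline; it is not the authors' implementation, the VAE/GAN training losses and discriminator are omitted, and all module names, feature dimensions, and the use of PyTorch are assumptions made for illustration.

# Minimal sketch (assumed, not the paper's code): a generator conditioned on
# per-class category features synthesizes labelled feature instances for the
# unseen classes, which would then train an ordinary classifier.
import torch
import torch.nn as nn

FEAT_DIM, CAT_DIM, LATENT_DIM = 512, 128, 64   # assumed sizes

class ConditionalGenerator(nn.Module):
    """Maps (noise, category feature) -> synthetic deep feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + CAT_DIM, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, FEAT_DIM),
            nn.ReLU(),  # deep CNN features are typically non-negative
        )

    def forward(self, z, cat_feat):
        return self.net(torch.cat([z, cat_feat], dim=1))

def synthesize_unseen_features(generator, unseen_cat_feats, n_per_class=200):
    """Generate labelled feature instances for each unseen class."""
    feats, labels = [], []
    for label, cat_feat in enumerate(unseen_cat_feats):  # one vector per class
        z = torch.randn(n_per_class, LATENT_DIM)
        cond = cat_feat.expand(n_per_class, -1)
        feats.append(generator(z, cond))
        labels.append(torch.full((n_per_class,), label))
    return torch.cat(feats), torch.cat(labels)

In this reading of the abstract, the synthetic (feature, label) pairs train a softmax classifier, which is then applied to deep features extracted from real images of the unseen classes.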
