Abstract

In synthetic aperture radar (SAR) automatic target recognition, annotating targets is expensive and time-consuming, so training a network with few labeled data and plenty of unlabeled data has attracted the attention of many researchers. In this article, we design a semisupervised learning framework comprising a self-consistent augmentation rule, a mixup-based mixture, and a weighted loss, which allows a classification network to exploit unlabeled data during training and ultimately alleviates the demand for labeled data. The proposed self-consistent augmentation rule forces samples before and after augmentation to share the same label, which lets the network exploit unlabeled data; by balancing the numbers of labeled and unlabeled samples in a minibatch, it also preserves the effect of the supervised part of the framework and improves performance. A mixture method is then introduced to mix the labeled, unlabeled, and augmented samples so that label information is better involved in the mixed samples. The total loss is defined as the weighted sum of a cross-entropy loss on the mixed-labeled mixtures and a mean-squared-error loss on the mixed-unlabeled mixtures. Experiments on the MSTAR and OpenSARShip data sets show that the proposed method not only far exceeds the state of the art among semisupervised classifiers but also approaches the state of the art among supervised networks.
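The mixing and loss-weighting steps described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the Beta-distributed mixing weight, the clamping of that weight, and the unlabeled-loss weight `w_u` are assumptions drawn from common mixup-style semisupervised formulations; only the pairing of cross-entropy with mixed-labeled samples and mean-squared error with mixed-unlabeled samples comes from the abstract.

```python
import math
import random

def mixup(x1, y1, x2, y2, alpha=0.75):
    # Draw a mixing weight from Beta(alpha, alpha); clamping it to >= 0.5
    # (a common mixup-variant convention, assumed here) keeps the mixture
    # dominated by the first sample, so it inherits that sample's loss type.
    lam = random.betavariate(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1.0 - lam) * b for a, b in zip(y1, y2)]
    return x, y

def cross_entropy(target, pred, eps=1e-12):
    # CE between a (possibly soft) target distribution and predicted probabilities.
    return -sum(t * math.log(p + eps) for t, p in zip(target, pred))

def mse(target, pred):
    # Mean-squared error between two probability vectors.
    return sum((t - p) ** 2 for t, p in zip(target, pred)) / len(target)

def semi_supervised_loss(labeled, unlabeled, w_u=1.0):
    # labeled / unlabeled: lists of (target_distribution, prediction) pairs.
    # Total loss = CE on mixed-labeled pairs + w_u * MSE on mixed-unlabeled pairs.
    l_x = sum(cross_entropy(t, p) for t, p in labeled) / len(labeled)
    l_u = sum(mse(t, p) for t, p in unlabeled) / len(unlabeled)
    return l_x + w_u * l_u
```

In practice the predictions would come from the classification network and the unlabeled targets from its own (augmentation-consistent) guesses; here both are plain probability vectors so the arithmetic of the weighted loss is visible in isolation.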
