Abstract

Synthetic aperture radar (SAR) image recognition is an important stage of SAR image interpretation. The standard convolutional neural network (CNN) has been successfully applied to SAR image recognition owing to its powerful feature extraction capability. Nevertheless, the CNN requires numerous labeled samples to achieve satisfactory recognition performance, and its accuracy degrades greatly when labeled samples are insufficient. To improve SAR image recognition accuracy with a small number of labeled samples, a new few-shot learning method is proposed in this paper. We first utilize the attention prototypical network (APN) to compute the average features of the support images of each category, which serve as the prototypes. Afterwards, feature extraction is performed on the query images using the attention convolutional neural network (ACNN). Finally, the feature matching classifier (FMC) calculates similarity scores between the query feature maps and the prototypes. We embed the attention module SENet into the APN, ACNN, and FMC, which effectively enhances the expressiveness of the prototypes and the feature maps. In addition, the loss function of our method combines a cross-entropy loss with a prototype-separability loss. During training, this loss function increases the separability of different prototypes, which contributes to higher recognition accuracy. We perform experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) and the Vehicle and Aircraft (VA) datasets. Experimental results demonstrate that our method outperforms related state-of-the-art few-shot image recognition methods.
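The sketch below illustrates the prototype-based episode described above: class prototypes are the mean support embeddings, query features are scored against prototypes, and the training loss adds a separability term to the cross-entropy. This is a minimal, hypothetical illustration only; the distance-based similarity, the cosine form of the prototype-separability penalty, and the weight `lambda_sep` are assumptions, not the paper's exact FMC or loss formulation.

```python
import torch
import torch.nn.functional as F

def compute_prototypes(support_feats, support_labels, num_classes):
    # support_feats: (N_support, D) embeddings (here standing in for APN outputs).
    # Prototype of each class = mean of that class's support embeddings.
    return torch.stack([
        support_feats[support_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])  # (num_classes, D)

def episode_loss(query_feats, query_labels, protos, lambda_sep=0.1):
    # Similarity scores: negative squared Euclidean distance (assumed form;
    # the paper's FMC may use attention-weighted matching instead).
    logits = -torch.cdist(query_feats, protos) ** 2           # (N_query, C)
    ce = F.cross_entropy(logits, query_labels)

    # Prototype-separability term (assumed form): penalize high cosine
    # similarity between prototypes of different classes.
    p = F.normalize(protos, dim=1)
    sim = p @ p.t()                                           # (C, C)
    off_diag = sim - torch.diag(torch.diag(sim))
    sep = off_diag.clamp(min=0).sum() / (p.size(0) * (p.size(0) - 1))

    return ce + lambda_sep * sep

# Toy 5-way 5-shot episode with random embeddings standing in for ACNN/APN features.
torch.manual_seed(0)
C, K, Q, D = 5, 5, 15, 64
support_feats = torch.randn(C * K, D)
support_labels = torch.arange(C).repeat_interleave(K)
query_feats = torch.randn(C * Q, D)
query_labels = torch.arange(C).repeat_interleave(Q)

protos = compute_prototypes(support_feats, support_labels, C)
print(episode_loss(query_feats, query_labels, protos).item())
```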
