Abstract

Although convolutional neural networks (CNN) have shown excellent performance in many image recognition tasks, they commonly require large amounts of labeled data, and their recognition performance is often unsatisfactory when labeled training data are limited. In recent years, the capsule network (CapsNet) has been shown to achieve high recognition accuracy with a small set of training samples. In this study, a class separable loss based on cosine similarity is proposed to enhance the distinguishability of the extracted features. It is added as a regularization term to the original loss function to train the network, narrowing the intra-class difference and enlarging the inter-class difference in each iteration. Meanwhile, a multi-dimensional parallel capsule module is established to obtain robust features and spatial relationships from the original images. Feature maps from different levels of convolution are extracted as the input of this module: structural features derived from low-level convolution and semantic features derived from high-level convolution are used for low-dimensional capsule coding and high-dimensional capsule coding, respectively. In our experiments, the moving and stationary target acquisition and recognition (MSTAR) database is used. We find that the accuracy of the multi-dimensional parallel capsule network with class separable loss (MdpCaps-Csl) is 99.79% using all training samples, which is higher than that of most current recognition methods. More importantly, the accuracy reaches 97.73% even when only 10% of the training samples are used, indicating that MdpCaps-Csl performs excellently with limited training samples.
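
To make the multi-dimensional parallel capsule module more concrete, the sketch below shows one possible PyTorch realization. The layer sizes, kernel parameters, and the zero-padding merge step are illustrative assumptions, not the authors' exact architecture; the sketch only captures the idea of encoding low-level and high-level feature maps into low- and high-dimensional primary capsules in parallel.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelPrimaryCaps(nn.Module):
    """Encodes low-level (structural) and high-level (semantic) feature maps
    into low- and high-dimensional primary capsules in parallel (assumed design)."""

    def __init__(self, low_channels=64, high_channels=128,
                 low_caps_dim=4, high_caps_dim=8, caps_channels=16):
        super().__init__()
        # Low-dimensional capsule coding of structural (low-level) features.
        self.low_conv = nn.Conv2d(low_channels, caps_channels * low_caps_dim,
                                  kernel_size=9, stride=2)
        # High-dimensional capsule coding of semantic (high-level) features.
        self.high_conv = nn.Conv2d(high_channels, caps_channels * high_caps_dim,
                                   kernel_size=9, stride=2)
        self.low_caps_dim = low_caps_dim
        self.high_caps_dim = high_caps_dim

    @staticmethod
    def squash(u, dim=-1):
        # Standard capsule squashing non-linearity.
        norm2 = (u ** 2).sum(dim=dim, keepdim=True)
        return norm2 / (1.0 + norm2) * u / (norm2.sqrt() + 1e-8)

    def forward(self, low_feat, high_feat):
        b = low_feat.size(0)
        low = self.low_conv(low_feat).view(b, -1, self.low_caps_dim)
        high = self.high_conv(high_feat).view(b, -1, self.high_caps_dim)
        # Zero-pad the low-dimensional capsules so both streams share one capsule
        # dimension and can be concatenated before routing (an assumed merge step).
        low = F.pad(low, (0, self.high_caps_dim - self.low_caps_dim))
        return self.squash(torch.cat([low, high], dim=1))
```

In such a design, the concatenated primary capsules from both streams would then be routed to the class capsules by the usual dynamic routing procedure.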

Highlights

  • As an active microwave remote sensing imaging system, synthetic aperture radar (SAR) can penetrate clouds and vegetation to reveal covered information with high resolution and is little affected by weather conditions [1]

  • The data were collected using the Sandia National Laboratories (SNL) X-band SAR sensor platform in spotlight imaging mode and were co-funded by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). The resolution of the database is 0.3 m × 0.3 m and the omnidirectional angle coverage is 0° to 360°. It has been widely used to examine and evaluate SAR image target recognition algorithms

  • The capsule network (CapsNet) is very effective at capturing pose information and spatial attributes of images compared with convolutional neural networks (CNN), and it can learn well from a small amount of data

Summary

INTRODUCTION

As an active microwave remote sensing imaging system, synthetic aperture radar (SAR) can penetrate clouds and vegetation to reveal covered information with high resolution and is little affected by weather conditions [1]. The main contributions of our work are as follows: in order to enhance CapsNet's feature extraction capability for SAR image data, a class separable loss based on cosine similarity is added to the loss function as a regularization term, which reduces intra-class differences and increases inter-class differences during feature extraction. This class separable loss, together with a multi-dimensional parallel capsule module, is proposed to enhance the feature extraction ability and improve the recognition performance of the network.
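
A minimal sketch of how such a cosine-similarity-based class separable regularizer could be implemented is given below. The function name, the clamping, and the equal weighting of the intra- and inter-class terms are assumptions for illustration, not the authors' exact formulation; the sketch only shows the idea of pulling same-class features together and pushing different-class features apart.

```python
import torch
import torch.nn.functional as F

def class_separable_loss(features, labels):
    """features: (N, D) feature vectors; labels: (N,) integer class ids."""
    # Normalize so pairwise dot products equal cosine similarities.
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()                                # (N, N) cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    intra = sim[same & ~eye]                               # same-class pairs, no self-pairs
    inter = sim[~same]                                     # different-class pairs
    # Pull same-class features together (similarity -> 1) and push
    # different-class features apart (similarity -> 0 or below).
    intra_term = (1.0 - intra).mean() if intra.numel() > 0 else sim.new_zeros(())
    inter_term = inter.clamp(min=0).mean() if inter.numel() > 0 else sim.new_zeros(())
    return intra_term + inter_term

# Used as a regularization term added to the main (e.g. margin) loss:
# total_loss = margin_loss + lambda_cs * class_separable_loss(capsule_feats, labels)
```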

CLASS SEPARABLE LOSS
EXPERIMENTS ON PARTIAL TRAINING SAMPLES
CONCLUSIONS
