Abstract

Due to their high computational cost and large memory overhead, it is difficult to deploy original deep convolutional neural networks (CNNs) on real-time embedded devices for synthetic aperture radar (SAR) target recognition. In addition, existing lightweight methods suffer from a trade-off between compression ratio and recognition accuracy. In this paper, a lossless lightweight design strategy combining pruning and knowledge distillation is proposed for CNNs to achieve efficient SAR target recognition. Specifically, structured pruning is first performed on the convolutional network layer by layer to yield a lightweight network, which is subsequently treated as the student network. Then, the pruned network is refined by knowledge distillation with the help of the teacher network (i.e., the unpruned, well-trained network) to recover the accuracy. Moreover, weight sharing can be adopted to further reduce weight storage without affecting the final overall accuracy. Experiments on the moving and stationary target acquisition and recognition (MSTAR) 10-class recognition task with all-convolutional networks (A-ConvNets) and the visual geometry group network (VGGNet) demonstrate that the proposed strategy achieves lossless compression ratios of 65.68 and 344, respectively, and reduces the computational cost by 2.5 and 18 times.
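To make the pruning-then-distillation pipeline described above concrete, the following is a minimal PyTorch-style sketch. It assumes L1-norm filter importance for the layer-wise structured pruning and the standard Hinton-style distillation loss; the pruning criterion, temperature, and weighting factor are illustrative assumptions, not values taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
        # Structured pruning sketch (hypothetical helper): keep the filters with
        # the largest L1 norms. Note that the next layer's input channels must
        # also be trimmed to match, which this sketch omits.
        n_keep = max(1, int(conv.out_channels * keep_ratio))
        importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # L1 norm per filter
        keep_idx = importance.argsort(descending=True)[:n_keep]
        pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                           stride=conv.stride, padding=conv.padding,
                           bias=conv.bias is not None)
        pruned.weight.data = conv.weight.data[keep_idx].clone()
        if conv.bias is not None:
            pruned.bias.data = conv.bias.data[keep_idx].clone()
        return pruned

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature: float = 4.0, alpha: float = 0.7):
        # Soft-target KL term (teacher guidance) plus hard-label cross-entropy term.
        soft = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                        F.softmax(teacher_logits / temperature, dim=1),
                        reduction="batchmean") * temperature ** 2
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

In use, the pruned student would be fine-tuned on the SAR training batches with distillation_loss, where teacher_logits come from the frozen, unpruned teacher network evaluated on the same inputs.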
