Abstract

Existing synthetic aperture radar (SAR) automatic target recognition (ATR) methods have shown impressive results in static scenarios, yet their performance drops sharply as new target categories are continuously added. To address this problem, this letter proposes a novel ATR method, the multi-level adaptive knowledge distillation network (MLAKDN), to achieve incremental SAR target recognition. Specifically, an adaptive weighted distillation strategy is first proposed, which mitigates forgetting of old-category knowledge by distilling multi-stage soft-label information of old categories at the classification level. Then, a feature distillation method based on a gradient-maximum criterion is developed to filter and distill discriminative features, so as to recall more old-category knowledge at the feature level. Meanwhile, a model rebalancing technique is designed to effectively balance the model between new and old categories. Finally, a weighted incremental classification loss is presented to train the whole model. Experiments on the moving and stationary target acquisition and recognition (MSTAR) dataset and the synthetic and measured paired labeled experiment (SAMPLE) dataset show that the proposed method outperforms several state-of-the-art methods on incremental SAR target recognition tasks.
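The abstract describes the training objective only at a high level. Below is a minimal sketch, assuming a PyTorch-style teacher-student setup, of how a combined loss with soft-label distillation of old categories and gradient-selected feature distillation might be composed. All names and hyperparameters (temperature T, lambda_kd, lambda_feat, top_k) and the channel-selection heuristic are illustrative assumptions; this does not reproduce the authors' exact MLAKDN formulation.

```python
# Minimal sketch (not the authors' implementation): an incremental-learning loss
# combining (i) soft-label distillation of old categories and (ii) feature
# distillation restricted to channels selected by gradient magnitude.
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn.functional as F

def soft_label_kd_loss(student_logits, teacher_logits, old_classes, T=2.0):
    """Distill soft labels of the old categories only (classification level)."""
    s = F.log_softmax(student_logits[:, :old_classes] / T, dim=1)
    t = F.softmax(teacher_logits[:, :old_classes] / T, dim=1)
    return F.kl_div(s, t, reduction="batchmean") * T * T

def gradient_selected_feature_loss(student_feat, teacher_feat, grads, top_k=64):
    """Distill only the feature channels with the largest gradient magnitude (feature level)."""
    # grads: gradient of the classification loss w.r.t. the features,
    # shape (N, C, H, W); averaged to one score per channel.
    scores = grads.abs().mean(dim=(0, 2, 3))
    idx = scores.topk(min(top_k, scores.numel())).indices
    return F.mse_loss(student_feat[:, idx], teacher_feat[:, idx])

def incremental_loss(student_logits, teacher_logits, student_feat, teacher_feat,
                     grads, labels, old_classes, lambda_kd=1.0, lambda_feat=0.5):
    """Weighted sum of classification, soft-label KD, and feature distillation terms."""
    ce = F.cross_entropy(student_logits, labels)
    kd = soft_label_kd_loss(student_logits, teacher_logits, old_classes)
    fd = gradient_selected_feature_loss(student_feat, teacher_feat, grads)
    return ce + lambda_kd * kd + lambda_feat * fd
```

In this sketch, the old (frozen) model supplies teacher_logits and teacher_feat, while the updated model supplies the student tensors; the weighting coefficients stand in for the adaptive weights that the letter computes from the data.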
