Abstract
Identifying the intentions of aerial targets is crucial for air situation understanding and decision making. Deep learning, with its powerful feature learning and representation capability, has become a key means of achieving higher performance in aerial target intention recognition (ATIR). However, conventional supervised deep learning methods rely on abundant labelled samples for training, which are difficult to obtain quickly in practical scenarios, posing a significant challenge to the effectiveness of training deep learning models. To address this issue, this paper proposes a novel few‐label ATIR method based on deep contrastive learning, which combines the advantages of self‐supervised learning and semi‐supervised learning. Specifically, leveraging unlabelled samples, we first employ strong and weak data augmentation views and a temporal contrasting module to capture temporally relevant features, while a contextual contrasting module is used to learn discriminative representations. Subsequently, the network is fine‐tuned with a limited set of labelled samples to further refine the learnt representations. Experimental results on an ATIR dataset demonstrate that our method significantly outperforms other few‐label classification baselines in terms of recognition accuracy and Macro F1 score when the proportion of labelled samples is as low as 1% and 5%.
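The contrastive pretraining stage described above can be illustrated with a minimal NumPy sketch. All names and choices here are assumptions for illustration only: the abstract does not specify the paper's actual augmentations or loss, so this sketch uses generic jitter/permutation augmentations and the widely used NT-Xent contrastive loss as stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_augment(x, sigma=0.01):
    # Weak view: light jitter (additive Gaussian noise) -- illustrative choice.
    return x + rng.normal(0.0, sigma, x.shape)

def strong_augment(x, n_segments=4, sigma=0.05):
    # Strong view: permute time segments, then heavier jitter -- illustrative choice.
    segs = np.array_split(x, n_segments, axis=-1)
    order = rng.permutation(n_segments)
    x_perm = np.concatenate([segs[i] for i in order], axis=-1)
    return x_perm + rng.normal(0.0, sigma, x_perm.shape)

def nt_xent(z1, z2, tau=0.2):
    # NT-Xent contrastive loss over two batches of L2-normalised context
    # embeddings: row i of z1 and row i of z2 form a positive pair, and
    # every other embedding in the combined 2N batch acts as a negative.
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)            # (2N, d)
    sim = (z @ z.T) / tau                           # temperature-scaled similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    pos = sim[np.arange(2 * n), pos_idx]            # similarity to each positive
    log_denom = np.log(np.exp(sim).sum(axis=1))     # log of softmax denominator
    return float(np.mean(log_denom - pos))
```

In the pipeline the abstract outlines, the two augmented views of each unlabelled sequence would be passed through an encoder to produce context embeddings, the contrastive loss would pull the two views of the same sequence together, and the pretrained encoder would then be fine-tuned on the small labelled set.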