Abstract
Reconnaissance unmanned aerial vehicles estimate parameters of intercepted signals and process them to identify and locate radars. However, distinguishing quasi-simultaneous arrival signals (QSAS) has become increasingly challenging in complex electromagnetic environments. To address this problem, a self-supervised deep representation learning framework is proposed. The framework consists of two phases: (1) pre-training an autoencoder: to learn representations of unlabeled QSAS, a ConvNeXt V2 network is trained to extract features from masked time–frequency images and reconstruct the corresponding signal in both the time and frequency domains; (2) transferring the learned knowledge: for downstream tasks, the encoder layers are frozen and only the linear layer is fine-tuned to classify QSAS under few-shot conditions. Experimental results demonstrate that the proposed algorithm achieves an average recognition accuracy above 81% for signal-to-noise ratios from −16 dB to 16 dB. Compared with existing CNN-based and Transformer-based neural networks, it shortens testing time by about 11× and improves accuracy by up to 21.95%.
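The abstract does not specify the masking scheme, so the following is only a minimal sketch of the patch-masking step that masked-autoencoder pre-training of this kind relies on: a time–frequency image is split into non-overlapping patches and a random fraction is hidden before the encoder sees it. The patch size (8×8) and mask ratio (0.6) here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mask_patches(tf_image, patch=8, mask_ratio=0.6, rng=None):
    """Randomly zero out a fraction of non-overlapping patches of a
    time-frequency image, as in masked-autoencoder pre-training.
    Returns the masked image and a boolean patch mask (True = hidden).
    patch and mask_ratio are illustrative, not the paper's settings."""
    rng = np.random.default_rng(rng)
    h, w = tf_image.shape
    gh, gw = h // patch, w // patch            # patch-grid dimensions
    n = gh * gw
    n_mask = int(round(n * mask_ratio))        # number of patches to hide
    mask = np.zeros(n, dtype=bool)
    mask[rng.permutation(n)[:n_mask]] = True
    masked = tf_image.copy()
    for k in np.flatnonzero(mask):
        r, c = divmod(k, gw)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return masked, mask.reshape(gh, gw)

# Example: a 64x64 spectrogram-like array; 60% of its 8x8 patches are hidden
img = np.ones((64, 64))
masked, mask = mask_patches(img, patch=8, mask_ratio=0.6, rng=0)
```

During pre-training the encoder would receive `masked` and be trained to reconstruct the hidden regions of `img`; in the fine-tuning phase only a linear head on top of the frozen encoder is updated.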