Abstract

Deep learning improves both the processing speed and the accuracy of automatic modulation recognition (AMR), enabling intelligent spectrum management and electronic reconnaissance. However, deep learning-aided AMR usually requires a large number of labeled samples to train a reliable neural network model, while in practical applications economic costs and privacy constraints leave only a small number of labeled samples alongside a large number of unlabeled ones. This paper proposes a Transformer-based contrastive semi-supervised learning framework for AMR. First, the Transformer-based encoder is pre-trained with self-supervised contrastive learning on the unlabeled samples, with data augmentation realized through time warping. Then, the pre-trained encoder and a randomly initialized classifier are fine-tuned on the labeled samples, employing hierarchical learning rates to ensure classification accuracy. To address the difficulties of applying the Transformer to AMR, a convolutional Transformer deep neural network is proposed that incorporates convolutional embedding, attention bias, and attention pooling. In experiments, the feasibility of the framework is analyzed through linear evaluation of its components on the RML2016.10a dataset, and the proposed framework is compared with existing semi-supervised methods on the RML2016.10a and RML2016.10b datasets to verify its superiority and stability.
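
The abstract names time warping as the augmentation but gives no implementation details. As a rough illustration only, a time-warping view generator for raw I/Q samples could be sketched as below; the function name, knot count, and scale range are assumptions, not the paper's implementation:

    import numpy as np

    def time_warp(iq, num_knots=4, max_scale=0.2, rng=None):
        """Randomly warp the time axis of an I/Q signal of shape (2, L).

        A smooth random speed curve is built from a few knots, turned into
        a monotone warped time axis, and the signal is resampled along it.
        Illustrative sketch, not the paper's exact augmentation.
        """
        rng = rng or np.random.default_rng()
        _, length = iq.shape
        # Per-knot speed factors around 1.0, linearly interpolated in time.
        knots = np.linspace(0, length - 1, num_knots + 2)
        speeds = 1.0 + rng.uniform(-max_scale, max_scale, num_knots + 2)
        speed_curve = np.interp(np.arange(length), knots, speeds)
        # Cumulative warped time, rescaled back to the original range.
        warped_t = np.cumsum(speed_curve)
        warped_t = (warped_t - warped_t[0]) / (warped_t[-1] - warped_t[0]) * (length - 1)
        # Resample the I and Q channels at the warped time points.
        return np.stack([np.interp(warped_t, np.arange(length), ch) for ch in iq])

Two independently warped views of the same sample would then serve as the positive pair for the contrastive pre-training objective.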

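Likewise, "hierarchical learning rates" suggests assigning a smaller rate to the pre-trained encoder than to the randomly initialized classifier during fine-tuning, so that the contrastively learned features are not overwritten. A minimal PyTorch sketch with placeholder modules and assumed rate values:

    import torch

    # Placeholder modules standing in for the paper's pre-trained
    # convolutional Transformer encoder and its new classifier head.
    encoder = torch.nn.Sequential(
        torch.nn.Conv1d(2, 64, kernel_size=3, padding=1),
        torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool1d(1),
        torch.nn.Flatten(),
    )
    classifier = torch.nn.Linear(64, 11)  # RML2016.10a has 11 modulation classes

    # Hierarchical learning rates via optimizer parameter groups:
    # a small rate for the pre-trained encoder, a larger one for the head.
    optimizer = torch.optim.Adam([
        {"params": encoder.parameters(), "lr": 1e-4},
        {"params": classifier.parameters(), "lr": 1e-3},
    ])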