Abstract

Automatic modulation classification (AMC) identifies the modulation type of a received signal and plays a vital role in ensuring physical-layer security for Internet of Things (IoT) networks. Inspired by the great success of deep learning in pattern recognition, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been introduced into AMC. Two data formats are commonly used in AMC: the in-phase/quadrature (I/Q) representation and the amplitude/phase (A/P) representation. However, most AMC algorithms focus on structural innovations, while the differences and characteristics of I/Q and A/P are left unanalyzed. In this article, many popular AMC algorithms are reproduced and evaluated on the same data set, using I/Q and A/P, respectively, for comparison. The experimental results show that: 1) CNN-RNN-like algorithms using A/P as input data are superior to those using I/Q at high signal-to-noise ratio (SNR), while the opposite holds at low SNR and 2) the features extracted from I/Q and A/P are complementary to each other. Motivated by these findings, a multitask learning-based deep neural network (MLDNN) is proposed, which effectively fuses I/Q and A/P. In addition, the MLDNN has a novel backbone made up of three blocks that extract discriminative features: a CNN block, a bidirectional gated recurrent unit (BiGRU) block, and a step attention fusion network (SAFN) block. Unlike most CNN-RNN-like algorithms, which use only the last step outputs of the RNN, the MLDNN exploits all step outputs of the BiGRU with the help of the SAFN. Extensive simulations verify that the proposed MLDNN achieves superior performance on the public benchmark.
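To make the described backbone concrete, the following is a minimal PyTorch sketch of the overall layout the abstract names: two branches (one for I/Q, one for A/P), each consisting of a CNN block, a BiGRU block, and a step-attention fusion over all BiGRU outputs, with per-branch auxiliary heads and a fused head for multitask training. All class names, layer sizes, kernel widths, and the exact attention form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StepAttentionFusion(nn.Module):
    """Hypothetical step-attention fusion: weights and sums all BiGRU step outputs."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, steps):                        # steps: (batch, time, hidden)
        weights = torch.softmax(self.score(steps), dim=1)  # attention over time steps
        return (weights * steps).sum(dim=1)          # -> (batch, hidden)

class Branch(nn.Module):
    """One backbone branch: CNN block -> BiGRU block -> step-attention fusion."""
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                    # channel/kernel sizes are assumptions
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.bigru = nn.GRU(32, hidden_dim, batch_first=True, bidirectional=True)
        self.safn = StepAttentionFusion(2 * hidden_dim)

    def forward(self, x):                            # x: (batch, 2, length), I/Q or A/P
        feats = self.cnn(x).transpose(1, 2)          # -> (batch, length, channels)
        steps, _ = self.bigru(feats)                 # all step outputs, not just the last
        return self.safn(steps)                      # -> (batch, 2 * hidden_dim)

class MLDNNSketch(nn.Module):
    """Assumed multitask layout: I/Q and A/P branches feed a fused classifier,
    while auxiliary per-branch heads provide the additional training tasks."""
    def __init__(self, num_classes=11, hidden_dim=64):
        super().__init__()
        self.iq_branch = Branch(hidden_dim)
        self.ap_branch = Branch(hidden_dim)
        self.iq_head = nn.Linear(2 * hidden_dim, num_classes)
        self.ap_head = nn.Linear(2 * hidden_dim, num_classes)
        self.fused_head = nn.Linear(4 * hidden_dim, num_classes)

    def forward(self, iq, ap):
        f_iq, f_ap = self.iq_branch(iq), self.ap_branch(ap)
        fused = torch.cat([f_iq, f_ap], dim=1)       # feature-level fusion of I/Q and A/P
        return self.fused_head(fused), self.iq_head(f_iq), self.ap_head(f_ap)
```

Under this assumed layout, training would minimize a weighted sum of cross-entropy losses over the fused head and the two auxiliary heads, which is one common way to realize the multitask fusion the abstract describes.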
