Abstract

Deep-learning-based automatic modulation recognition (AMR) has recently attracted significant interest owing to its high recognition accuracy and freedom from hand-crafted classification criteria. However, achieving high recognition accuracy in increasingly complex channel environments while keeping model complexity moderate remains extremely challenging. To address this issue, we propose M-LSCANet, a multi-modal AMR neural network with SNR segmentation, which integrates an SNR segmentation strategy, lightweight residual stacks, skip connections, and an attention mechanism. In the proposed model, time-domain I/Q data and constellation diagram data are used jointly to extract signal features in the medium- and high-signal-to-noise-ratio (SNR) regions, whereas only I/Q signals are used in the low-SNR region. Constellation diagrams are highly distinguishable at medium and high SNRs, which helps separate high-order modulations; at low SNR, however, heavy noise blurs the constellations and makes them excessively similar, seriously interfering with modulation recognition and causing performance loss. Notably, the proposed method uses lightweight residual stacks and rich skip connections, so that more of the initial information is retained while learning constellation diagram features and extracting time-domain features from shallow to deep layers, at only moderate complexity. Additionally, after feature fusion, we adopt the convolutional block attention module (CBAM) to reweight features in both the channel and spatial domains, further improving the model's ability to mine signal characteristics. As a result, the proposed approach significantly improves overall recognition accuracy. Experimental results on the public RadioML 2016.10B dataset, with SNR ranging from −20 dB to 18 dB, show that M-LSCANet outperforms existing methods in classification accuracy, achieving 93.4% at 0 dB and 95.8% at 12 dB, improvements of 2.7% and 2.0%, respectively, over TMRN-GLU. Moreover, the proposed model has a moderate parameter count compared to state-of-the-art methods.
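To make the pipeline described above concrete, the following minimal PyTorch sketch illustrates the main ingredients the abstract names: two lightweight residual branches (one for time-domain I/Q samples, one for constellation images), an SNR gate that suppresses the constellation features below a threshold, CBAM reweighting after feature fusion, and a classification head. This is not the authors' implementation; all module names, layer widths, input sizes, and the snr_threshold_db boundary are illustrative assumptions (only the 10-class output matches RadioML 2016.10B).

import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Lightweight residual stack: two convs plus an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # skip connection retains initial information

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel, then spatial, reweighting."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors.
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * w.view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

def make_branch():
    """Conv stem -> lightweight residual stack -> fixed-size feature map."""
    return nn.Sequential(
        nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
        ResidualUnit(64),
        nn.AdaptiveAvgPool2d((4, 4)))

class MLSCANetSketch(nn.Module):
    def __init__(self, n_classes=10, snr_threshold_db=0.0):
        super().__init__()
        self.snr_threshold_db = snr_threshold_db  # assumed low-/high-SNR boundary
        self.iq_branch = make_branch()     # input: (B, 1, 2, 128) I/Q samples
        self.const_branch = make_branch()  # input: (B, 1, H, W) constellation image
        self.cbam = CBAM(128)
        self.head = nn.Linear(128, n_classes)

    def forward(self, iq, constellation, snr_db):
        f_iq = self.iq_branch(iq)                 # (B, 64, 4, 4)
        f_c = self.const_branch(constellation)    # (B, 64, 4, 4)
        # SNR segmentation: below the threshold the constellation is too
        # blurred to help, so its features are zeroed and only I/Q is used.
        mask = (snr_db >= self.snr_threshold_db).float().view(-1, 1, 1, 1)
        fused = torch.cat([f_iq, f_c * mask], dim=1)   # (B, 128, 4, 4)
        fused = self.cbam(fused).mean(dim=(2, 3))      # reweight, then global pool
        return self.head(fused)

As a usage example, model = MLSCANetSketch(); logits = model(torch.randn(8, 1, 2, 128), torch.randn(8, 1, 64, 64), torch.full((8,), 12.0)) returns per-class logits for a batch of eight signals at 12 dB, with the constellation branch active because the SNR is above the assumed threshold.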
