Abstract

In the field of electronic countermeasures, the recognition of radar signals is extremely important. This paper uses GNU Radio and Universal Software Radio Peripherals to generate 10 classes of close-to-real multipulse radar signals, namely, Barker, Chaotic, EQFM, Frank, FSK, LFM, LOFM, OFDM, P1, and P2. To obtain the time-frequency image (TFI) of each multipulse radar signal, the signal is transformed with the Choi–Williams distribution (CWD). Targeting the features of the multipulse radar signal TFI, we designed a distinguishing feature fusion extraction (DFFE) module and, based on this module, proposed a new deep learning model, HRF-Net. The model has relatively few parameters and a low computational cost. Experiments were carried out at signal-to-noise ratios (SNRs) from −14 to 4 dB. At −6 dB, the recognition accuracy of HRF-Net reached 99.583%, and it still reached 97.500% at −14 dB. Compared with other methods, HRF-Nets show relatively better generalization and robustness.
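
The paper obtains TFIs by applying the CWD to each received pulse train. As a minimal illustrative sketch (not the authors' implementation; the function name, axis scaling, and the kernel parameter sigma are assumptions), a discrete Choi–Williams distribution can be computed in the ambiguity domain as follows:

```python
import numpy as np

def choi_williams_tfi(x, sigma=1.0):
    """Simplified discrete Choi-Williams distribution (illustrative sketch).

    x     : complex baseband signal, 1-D array (even length assumed)
    sigma : kernel parameter; larger values retain more cross-terms
    Returns an (N, N) real time-frequency image (rows = time, cols = frequency).
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Instantaneous autocorrelation K[n, m] = x[n + m] * conj(x[n - m])
    K = np.zeros((N, N), dtype=complex)
    n = np.arange(N)
    for m in range(-(N // 2), N // 2):
        idx_p, idx_m = n + m, n - m
        valid = (idx_p >= 0) & (idx_p < N) & (idx_m >= 0) & (idx_m < N)
        row = np.zeros(N, dtype=complex)
        row[valid] = x[idx_p[valid]] * np.conj(x[idx_m[valid]])
        K[:, m % N] = row                      # lag axis wrapped for the FFTs below
    # Ambiguity function: FFT of K over the time axis for each lag
    A = np.fft.fft(K, axis=0)
    # Choi-Williams exponential kernel in the (doppler, lag) plane
    theta = 2 * np.pi * np.fft.fftfreq(N)      # doppler axis (rad/sample)
    tau = np.fft.fftfreq(N) * N                # lag axis (samples)
    TH, TAU = np.meshgrid(theta, tau, indexing="ij")
    kernel = np.exp(-(TH ** 2) * (TAU ** 2) / sigma)
    # Back to the time-frequency plane: inverse FFT over doppler, FFT over lag
    tfi = np.fft.fft(np.fft.ifft(A * kernel, axis=0), axis=1)
    return np.abs(tfi)
```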

Highlights

  • [6] designed an algorithm based on stacked autoencoders and support vector machines (SVM). This method obtained the time-frequency diagram of radar signals through the Choi–Williams distribution, used stacked autoencoders to automatically extract features, and completed signal recognition through SVM

  • GNU Radio, USRP N210, and USRP-LW N210 are used to generate close-to-real radar signals with high reliability. 10 classes of multipulse radar signals are generated at signal-to-noise ratios (SNRs) from −14 to 4 dB, namely, Barker, Chaotic, EQFM, Frank, FSK, LFM, LOFM, OFDM, P1, and P2 (see the illustrative waveform sketch after this list)

  • Based on the distinguishing feature fusion extraction (DFFE) module, we propose three deep convolutional neural network structures, namely, high-resolution feature fusion extraction networks (HRF-Nets), which are HRF-Net157, HRF-Net187, and HRF-Net217
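
The dataset itself is produced with GNU Radio and USRP hardware. Purely as an illustrative stand-in (every parameter value below is an assumption, not one of the paper's settings), the following NumPy sketch builds a baseband LFM multipulse train and adds complex white Gaussian noise at a chosen SNR:

```python
import numpy as np

def lfm_pulse_train(fs=10e6, pulse_width=10e-6, pri=50e-6, n_pulses=8,
                    bandwidth=4e6, snr_db=-6):
    """Hypothetical software stand-in for one generated waveform class.

    Builds a baseband linear-frequency-modulated (LFM) multipulse train and
    adds complex white Gaussian noise at the requested SNR; the real dataset
    is generated with GNU Radio and USRP N210 / USRP-LW N210 hardware.
    """
    n_pri = int(round(pri * fs))                  # samples per pulse repetition interval
    n_pw = int(round(pulse_width * fs))           # samples per pulse
    t = np.arange(n_pw) / fs
    k = bandwidth / pulse_width                   # chirp rate (Hz/s)
    pulse = np.exp(1j * np.pi * k * t ** 2)       # unit-amplitude up-chirp
    sig = np.zeros(n_pri * n_pulses, dtype=complex)
    for p in range(n_pulses):
        sig[p * n_pri : p * n_pri + n_pw] = pulse
    # Complex AWGN scaled so that pulse power / noise power matches snr_db
    sig_power = np.mean(np.abs(pulse) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (np.random.randn(sig.size)
                                        + 1j * np.random.randn(sig.size))
    return sig + noise
```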


Summary

DFFE Module and HRF-Nets

Then, the Sigmoid function is used for activation to obtain a comprehensive two-dimensional spatial weight feature matrix. It emphasizes spatial positions with high correlation and weakens those with low correlation, focusing on which areas of the input image are more distinguishable. Based on the DFFE module, we propose three deep convolutional neural network structures, namely, high-resolution feature fusion extraction networks (HRF-Nets), which are HRF-Net157, HRF-Net187, and HRF-Net217. C-[MaxPool, AvgPool] represents compressing the spatial dimensions of the image to obtain a feature map with channel weight coefficients. Therefore, this paper first adopts Global Average Pooling (GAP) [24] and uses a single-layer full connection, which clearly reduces the computational cost and gives the network relatively high real-time performance. Since the recognition performance of the three networks differs only slightly, HRF-Net157 has the highest cost performance and is the better choice
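
The paper does not publish code for the DFFE module; the PyTorch sketch below shows one plausible reading of the description above, combining channel weights obtained from max and average pooling (C-[MaxPool, AvgPool]), a Sigmoid-activated two-dimensional spatial weight matrix, and a GAP plus single fully connected classifier head. Class names, the reduction ratio, and kernel sizes are hypothetical.

```python
import torch
import torch.nn as nn

class DFFEBlock(nn.Module):
    """Hypothetical sketch of a DFFE-style attention block (not the authors' code)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP for the channel-attention branch
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Fuses channel-wise mean/max maps into one spatial weight map
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: compress spatial dims with average and max pooling
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        ch_w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        x = x * ch_w
        # Spatial attention: Sigmoid-activated 2-D spatial weight matrix
        sp = torch.cat([x.mean(dim=1, keepdim=True),
                        x.amax(dim=1, keepdim=True)], dim=1)
        sp_w = torch.sigmoid(self.spatial_conv(sp))
        return x * sp_w

class HRFHead(nn.Module):
    """GAP followed by a single fully connected layer, as described in the summary."""

    def __init__(self, channels, num_classes=10):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):
        return self.fc(self.gap(x).flatten(1))
```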
