Abstract
Radar data mining is a key component of signal analysis: patterns hidden inside signals become gradually available during the learning process, and this capability is significant for enhancing the security of radar emitter classification (REC) systems. Although the radio frequency fingerprint (RFF) caused by imperfections in an emitter's hardware is difficult to forge, current REC methods based on deep-learning techniques, e.g., the convolutional neural network (CNN) and long short-term memory (LSTM), have difficulty capturing stable RFF features. In this paper, an online and non-cooperative multi-modal generic representation auxiliary learning REC model, namely the multi-modal generic representation auxiliary learning network (MGRALN), is put forward. Multi-modal means that multi-domain transformations are unified into a generic representation. This representation is then employed to facilitate mining the implicit information inside the signals and to improve model robustness, which is achieved by using the generic representation to guide network training and learning. Online means that the REC model is learned in a single training process and operates end to end. Non-cooperative denotes that no demodulation techniques are applied before the REC task. Experimental results on measured civil aviation radar data demonstrate that the proposed method achieves superior performance.
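To make the auxiliary-learning idea described above concrete, the following is a minimal sketch, not the authors' implementation: a shared encoder over raw I/Q samples is trained jointly on emitter classification and on regressing toward a generic representation derived from a multi-domain transform of the same signal. All module names, layer sizes, the loss weight, and the use of a log-magnitude spectrum as the auxiliary target are illustrative assumptions.

```python
# Sketch (assumed architecture, not the paper's code): auxiliary learning where a
# generic representation guides the training of an end-to-end REC network.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Small 1-D CNN over raw I/Q segments (hypothetical backbone)."""
    def __init__(self, in_ch=2, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, x):              # x: (batch, 2, length) raw I/Q samples
        return self.net(x)

class MGRALNSketch(nn.Module):
    """Classification head plus an auxiliary head that predicts the generic representation."""
    def __init__(self, num_classes=8, feat_dim=128, aux_dim=256):
        super().__init__()
        self.encoder = SharedEncoder(feat_dim=feat_dim)
        self.cls_head = nn.Linear(feat_dim, num_classes)   # REC output
        self.aux_head = nn.Linear(feat_dim, aux_dim)       # auxiliary prediction

    def forward(self, x):
        z = self.encoder(x)
        return self.cls_head(z), self.aux_head(z)

def generic_representation(x, aux_dim=256):
    """Stand-in for the unified multi-domain representation: here simply a
    log-magnitude spectrum pooled to a fixed size (an assumption)."""
    spec = torch.fft.fft(x[:, 0] + 1j * x[:, 1], dim=-1).abs().log1p()
    return nn.functional.adaptive_avg_pool1d(spec.unsqueeze(1), aux_dim).squeeze(1)

# One end-to-end training step: classification loss plus auxiliary regression loss.
model = MGRALNSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 2, 1024)                 # batch of raw I/Q segments
y = torch.randint(0, 8, (16,))               # emitter-class labels

logits, aux_pred = model(x)
loss = nn.functional.cross_entropy(logits, y) \
     + 0.5 * nn.functional.mse_loss(aux_pred, generic_representation(x))
opt.zero_grad(); loss.backward(); opt.step()
```

Because the auxiliary head shares the encoder with the classifier, the generic representation acts only as a training-time guide; at inference the classification head alone is used, so the model remains end to end and non-cooperative in the sense described above.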