Abstract
Automatic Dependent Surveillance-Broadcast (ADS-B) has been widely adopted for its low cost and high precision, and deep learning methods for ADS-B signal classification have achieved high performance. However, recent studies have shown that deep learning networks are highly sensitive and vulnerable to small perturbations. This paper proposes an ADS-B signal poisoning method based on a Generative Adversarial Network (GAN) that generates poisoned signals. One ADS-B signal classification network is designated as the attacked network and another as the protected network: when poisoned signals are fed into these two well-performing classifiers, the attacked network misclassifies them while the protected network still classifies them correctly. An attack-protect-similar loss function is further proposed to achieve a ‘triple win’: poor performance by the attacked network, good performance by the protected network, and poisoned signals that remain similar to unpoisoned signals. Experimental results show that the attacked network classifies poisoned signals with only 1.55% accuracy, while the protected network's accuracy is maintained at 99.38%.
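The three objectives named in the abstract can be sketched as a weighted sum of per-term losses. The sketch below is a hypothetical illustration, not the paper's exact formulation: the weights `lam_a`, `lam_p`, `lam_s`, the use of cross-entropy for the classification terms, and mean-squared error for the similarity term are all assumptions.

```python
import numpy as np

def cross_entropy(probs, label):
    # Negative log-likelihood of the true class, with a small epsilon
    # to avoid log(0).
    return -np.log(probs[label] + 1e-12)

def attack_protect_similar_loss(p_attacked, p_protected, true_label,
                                x_poisoned, x_clean,
                                lam_a=1.0, lam_p=1.0, lam_s=1.0):
    """Hypothetical sketch of an 'attack-protect-similar' objective.

    - attack term: negated cross-entropy on the attacked network, so
      minimizing the total loss pushes that network AWAY from the truth;
    - protect term: ordinary cross-entropy on the protected network,
      keeping it correct;
    - similar term: MSE between poisoned and clean signals, keeping the
      perturbation small.
    The lam_* weights are illustrative assumptions.
    """
    l_attack = -cross_entropy(p_attacked, true_label)
    l_protect = cross_entropy(p_protected, true_label)
    l_similar = np.mean((x_poisoned - x_clean) ** 2)
    return lam_a * l_attack + lam_p * l_protect + lam_s * l_similar
```

Under this sketch, a generator trained to minimize the total loss is rewarded when the attacked network is fooled, penalized when the protected network errs, and penalized for large perturbations, which matches the 'triple win' the abstract describes.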