Abstract

Deep neural networks have been applied to many tasks in air traffic management, from anomaly detection to flight trajectory prediction. However, such algorithms have been shown to be susceptible to adversarial examples. In this letter, we show that current deep learning algorithms proposed for spoofing detection are vulnerable to maliciously crafted ADS-B data. To inject false messages into the ADS-B system without being detected, the adversarial perturbation must balance two competing demands: it must be strong enough to overwhelm the channel noise, yet small enough to keep the decoding error low. Simulation results demonstrate that our approach evades the DNN-based spoofing detector without increasing the decoding error.
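The trade-off described above can be sketched with a gradient-based evasion attack. The toy example below is only illustrative and not the paper's method: it uses a hypothetical linear detector as a stand-in for the DNN spoofing detector, and an FGSM-style step whose L-infinity budget `eps` models the constraint that the perturbation must stay small enough to leave the decoding error low.

```python
import numpy as np

# Hypothetical stand-in detector: a linear score over signal samples.
# The paper attacks a DNN; this sketch only illustrates an FGSM-style
# perturbation whose amplitude budget (eps) bounds the impact on decoding.
rng = np.random.default_rng(0)
n = 64                            # samples per (toy) ADS-B message
w = rng.normal(size=n)            # detector weights (assumed, not real)

def detect_spoof(x):
    """Detector score; larger means 'more likely spoofed'."""
    return float(w @ x)

def fgsm_evade(x, eps):
    """One FGSM step pushing the score toward 'genuine',
    with the L-infinity budget eps limiting the perturbation size."""
    grad = w                      # gradient of the linear score w.r.t. x
    return x - eps * np.sign(grad)

# A message the detector scores as spoofed (constructed to have score > 0).
x = 0.1 * rng.normal(size=n) + 0.5 * np.sign(w)
x_adv = fgsm_evade(x, eps=0.6)

print(detect_spoof(x_adv) < detect_spoof(x))   # True: score pushed down
print(np.max(np.abs(x_adv - x)) <= 0.6)        # True: perturbation bounded
```

In the actual ADS-B setting the budget would be tied to the modulation scheme and channel noise floor rather than a raw L-infinity bound, but the structure of the optimization, minimizing the detector score subject to a perturbation constraint, is the same.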
