Abstract

The introduction of deep learning (DL) technology can improve the performance of cyber–physical systems (CPSs) in many ways. However, it also brings new security issues. To tackle these challenges, this article explores the vulnerabilities of DL-based unmanned aerial vehicles (UAVs), which are typical CPSs. Although many research works have been reported previously on adversarial attacks against DL models, only a few of them concern safety-critical CPSs, especially the regression models in such systems. In this article, we analyze the problem of adversarial attacks against DL-based UAVs and propose two adversarial attack methods against regression models in UAVs. The experiments demonstrate that the proposed nontargeted and targeted attack methods can both craft imperceptible adversarial images and pose a considerable threat to the navigation and control of UAVs. To address this problem, adversarial training and defensive distillation methods are further investigated and evaluated, increasing the robustness of DL models in UAVs. To our knowledge, this is the first study on adversarial attacks and defenses against DL-based UAVs, which calls for more attention to the security and safety of such safety-critical applications.
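To make the idea of a gradient-based targeted attack on a regression model concrete, the following is a minimal sketch (not the paper's actual method): an FGSM-style perturbation against a hypothetical linear regression model standing in for a UAV's control network. All names, dimensions, and the perturbation budget are illustrative assumptions.

```python
import numpy as np

# Hypothetical linear regression model standing in for a UAV's
# steering/control network: y = w @ x (all values illustrative).
rng = np.random.default_rng(0)
w = rng.normal(size=16)      # model weights
x = rng.normal(size=16)      # clean input features
eps = 0.05                   # L-inf perturbation budget

y_clean = w @ x
y_target = y_clean + 1.0     # attacker's desired (wrong) output

# Targeted FGSM-style step: minimize the loss L(x') = (w @ x' - y_target)^2.
# Its input gradient at x is dL/dx = 2 * (w @ x - y_target) * w.
grad = 2.0 * (y_clean - y_target) * w
x_adv = x - eps * np.sign(grad)   # one sign step opposite the gradient

# Each feature changes by at most eps, yet the model output shifts
# measurably toward the attacker's target:
print(abs(w @ x_adv - y_clean))
```

Against a real deep regression model, the analytic gradient above would be replaced by automatic differentiation through the network; the structure of the attack (a sign step bounded by a small budget) stays the same.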
