Abstract

The emergence of new autonomous driving systems and functions – in particular, systems that base their decisions on the output of machine learning subsystems responsible for environment perception – brings a significant change in the risks to the safety and security of transportation. These kinds of Advanced Driver Assistance Systems are vulnerable to new types of malicious attacks, and their properties are often not well understood. This paper demonstrates the theoretical and practical possibility of deliberate physical adversarial attacks against deep learning perception systems in general, with a focus on safety-critical driver assistance applications such as traffic sign classification in particular. Our newly developed traffic sign stickers differ from other similar methods insofar as they require no special knowledge or precision in their creation and deployment; thus they present a realistic and severe threat to traffic safety and security. In this paper we preemptively point out the dangers and easily exploitable weaknesses that current and future systems are bound to face.
