Abstract

The emergence of new autonomous driving systems and functions – in particular, systems that base their decisions on the output of machine learning subsystems responsible for environment perception – brings a significant change in the risks to the safety and security of transportation. These kinds of Advanced Driver Assistance Systems are vulnerable to new types of malicious attacks, and their properties are often not well understood. This paper demonstrates the theoretical and practical possibility of deliberate physical adversarial attacks against deep learning perception systems in general, with a focus on safety-critical driver assistance applications such as traffic sign classification in particular. Our newly developed traffic sign stickers differ from similar methods in that they require no special knowledge or precision in their creation and deployment, and thus present a realistic and severe threat to traffic safety and security. In this paper we preemptively point out the dangers and easily exploitable weaknesses that current and future systems are bound to face.
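For background on the general class of attack the abstract refers to, the sketch below shows a standard digital adversarial example via the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015). Note this is not the paper's method: the stickers described above require no model access or precision, whereas FGSM assumes white-box gradient access to a classifier. The `model`, `sign_batch`, and `true_labels` names are hypothetical placeholders.

```python
# Illustrative FGSM sketch (assumed background technique, not the paper's
# sticker attack): perturb an input to increase a classifier's loss.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (pixels in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the sign of the gradient, i.e. the direction
    # that locally increases the classification loss the most.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage with a hypothetical traffic-sign classifier `model`:
# adv = fgsm_perturb(model, sign_batch, true_labels)
# pred = model(adv).argmax(dim=1)  # frequently differs from true_labels
```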

