Abstract

Adversarial attacks on Deep Neural Networks (DNNs) represent a critical challenge to the adoption of DNNs in safety-critical applications. However, despite the great need, attacks on DNNs remain poorly understood, in part because no platform lets users gain a hands-on, intuitive understanding of them. In this paper, we address this problem by designing an extensible, configurable exploration platform for studying various attacks on DNNs. Our platform specifically targets DNNs deployed in the Computer Vision modules of automotive systems. Using the platform, users can mount various adversarial machine learning attacks, such as evasion attacks and image-perturbation attacks, and observe their adversarial effects on autonomous vehicles. The platform supports plug-and-play use of different neural network models developed for Traffic Sign Recognition in autonomous vehicles. The infrastructure includes both physical and mixed-reality variants, and we demonstrate the platform on two traffic sign recognition models under different adversarial attacks.
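
As an illustration of the kind of evasion attack the platform supports, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch. This code is not part of the paper's platform: the model handle, tensor shapes, and epsilon value are hypothetical, chosen only to show how a one-step image-perturbation attack against a traffic sign classifier is typically formed.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` one step along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss,
    # then clamp back to the valid image range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: `tsr_model` is any traffic sign classifier that
# returns logits for a batch of images normalized to [0, 1].
# adv_images = fgsm_attack(tsr_model, images, labels, epsilon=0.03)
```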
