Abstract

In an end-to-end vehicle control scenario, where a deep neural network is trained solely on visual input, adversarial vulnerability leaves open the possibility of manipulating the steering predictions. Patch-based adversarial attacks pose an especially serious threat, because they can be carried out in the real world by printing out a generated universal pattern. However, the boundary conditions and feasibility of such attacks to compromise the security of autonomous vehicles have so far been only sparsely studied. We demonstrate and evaluate such attacks in the CARLA simulation environment under different weather and lighting settings, conducting experiments in both open-loop and closed-loop attack scenarios. Our findings reveal that attack strength depends heavily on the surrounding location as well as on environmental conditions. We also observe that attack success in the open-loop scenario only partially coincides with that in the closed-loop scenario. This analysis helps set the stage for future experiments on public roads. Furthermore, we propose a defense concept that removes malignant perturbations from an input image without affecting its salient regions. We analyze deviations from the unattacked vehicle trajectory on both adversarial and suppressed inputs.
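The abstract does not describe the attack procedure in detail. As a rough illustration only, the sketch below shows how a universal adversarial patch could be optimized against an end-to-end steering regressor by maximizing the deviation of the steering prediction on patched frames. The model class `SteeringNet`, the fixed patch placement, the data loader format, and the deviation-maximizing loss are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn

# Placeholder steering network (assumption): any model mapping an RGB frame
# to a single steering value stands in for the end-to-end driving network.
class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def apply_patch(images, patch, top, left):
    """Paste a square patch onto a batch of images at a fixed location.
    Assumes the patch fits inside the image at (top, left)."""
    patched = images.clone()
    h, w = patch.shape[-2:]
    patched[:, :, top:top + h, left:left + w] = patch
    return patched


def train_universal_patch(model, loader, patch_size=64, steps=1000, lr=0.01,
                          top=100, left=100, device="cpu"):
    """Optimize a single (universal) patch that pushes the predicted steering
    angle away from the clean prediction across many driving frames.
    `loader` is assumed to yield (images, labels) batches of normalized frames."""
    model.eval()
    patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    data_iter = iter(loader)
    for _ in range(steps):
        try:
            images, _ = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            images, _ = next(data_iter)
        images = images.to(device)
        with torch.no_grad():
            clean_pred = model(images)          # steering on the clean frame
        adv_pred = model(apply_patch(images, patch, top, left))
        # Maximize the deviation from the clean steering prediction.
        loss = -torch.mean(torch.abs(adv_pred - clean_pred))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0.0, 1.0)             # keep the patch a valid image
    return patch.detach()
```

In a closed-loop evaluation, such a patch would instead be rendered at a fixed location in the scene (e.g. as a texture in CARLA), so that its effect compounds over consecutive control steps rather than being measured frame by frame.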
