Abstract

Adversarial examples have begun to receive widespread attention owing to the potential harm they pose to the most popular DNNs. They are crafted from original images by embedding carefully calculated perturbations. In some cases, the perturbations are so slight that neither human eyes nor detection algorithms can notice them, and this imperceptibility makes them more covert and dangerous. To investigate these invisible dangers in traffic DNN applications, we focus on imperceptible adversarial attacks on different traffic vision tasks, including traffic sign classification, lane detection, and street scene recognition. We propose a universal logits-map-based attack architecture against image semantic segmentation and design two targeted attack approaches based on it. All the attack algorithms generate micro-noise adversarial examples through iterative C&W optimization and achieve a 100% attack success rate with very low distortion: our experimental results indicate that the MAE (mean absolute error) of the perturbation noise for the traffic sign classifier attack is as low as 0.562, while those of the two semantic-segmentation-based attacks are only 1.503 and 1.574. We believe that our research on imperceptible adversarial attacks provides a useful reference for the security of DNN applications.
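As a rough illustration of the optimization described above, the following is a minimal sketch of a C&W-style iterative attack and of the MAE metric, assuming a PyTorch image classifier and inputs scaled to [0, 1]; the function names, the weighting constant c, and the 0-255 scaling of the MAE are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal C&W-style targeted attack sketch (assumed PyTorch classifier `model`,
# images in [0, 1]). Illustrative only; not the paper's exact algorithm.
import torch

def cw_attack(model, x, target, c=1.0, steps=200, lr=0.01):
    """Search for a low-distortion perturbation that drives x to the target class."""
    # Optimize in tanh-space so the adversarial image always stays in [0, 1].
    w = torch.atanh((x * 2 - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = (torch.tanh(w) + 1) / 2                       # back to image space
        logits = model(x_adv)
        # Margin loss: push the target logit above the best non-target logit.
        target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
        other_logit = logits.scatter(1, target.view(-1, 1), float('-inf')).max(1).values
        f_loss = torch.clamp(other_logit - target_logit, min=0).sum()
        dist = ((x_adv - x) ** 2).sum()                       # L2 distortion term
        loss = dist + c * f_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return ((torch.tanh(w) + 1) / 2).detach()

def mae(x, x_adv, scale=255.0):
    """Mean absolute error of the perturbation, reported on a 0-255 pixel scale."""
    return (x_adv - x).abs().mean().item() * scale
```

If the reported MAE values are indeed on a 0-255 scale, a value such as 0.562 corresponds to an average per-pixel change of roughly half a gray level, which is consistent with the imperceptibility claim.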

Highlights

  • DNN applications have shown significant potential in computer vision

  • Recent research has disclosed that DNNs are vulnerable to adversarial attacks [6,10]

  • We focus on imperceptible adversarial examples and propose three imperceptible adversarial attacks against different traffic vision tasks, including traffic sign classification, lane detection, and street scene segmentation

Summary

INTRODUCTION

DNN applications have shown significant potential in computer vision. However, research in recent years has disclosed that DNNs are vulnerable to adversarial attacks [6,10]. With carefully crafted perturbations added to raw images, adversarial examples can fool the models into producing wrong outputs. Adversarial examples can attack image classifiers [6,12,13] as well as semantic segmentation and object detection models [4,5,8,9,11]. Most existing attack algorithms introduce visible distortion or too much additional noise [7,16,18,19], so they can be detected and their attacks fail. It remains a great challenge to design adversarial examples that can fool both computer algorithms and human eyes. Our experimental results reveal that adversarial attacks can be implemented against various network models and that the attack effects can be designed arbitrarily. Our attacks are all white-box attacks [9,10]; the three networks involved are the well-known MobileNetV2 [1], U-Net [2], and DeepLabV3+ [3], trained on the well-known datasets BelgiumTS [21], Pascal VOC [22], and KITTI [23], respectively.
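To make the white-box setting concrete, here is a minimal sketch of the key assumption: the attacker can read the model's parameters and back-propagate through it to the input pixels. It uses torchvision's ImageNet-pretrained MobileNetV2 as a stand-in, since the paper's classifiers are trained on BelgiumTS, Pascal VOC, and KITTI and are not reproduced here; the random input is a placeholder for a real image.

```python
# Minimal white-box setup sketch: gradient of the loss with respect to the input
# pixels is the quantity an iterative attack perturbs along. Illustrative only.
import torch
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)    # placeholder input image
logits = model(x)
loss = torch.nn.functional.cross_entropy(logits, torch.tensor([0]))
loss.backward()
input_gradient = x.grad    # available only because the attacker sees the full model
```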

ADVERSARIAL ATTACKS
ADVERSARIAL ATTACK AGAINST TRAFFIC SIGN CLASSIFIER
Method
SEMANTIC SEGMENTATION ATTACK
Lane attack
Street scene recognition attack
CONCLUSION