Abstract

Deep neural networks are a central research branch of artificial intelligence and are suited to many decision-making fields. Autonomous driving and unmanned vehicles often depend on deep neural networks for accurate and reliable detection, classification, and ranging of surrounding objects in real on-road environments, either locally or through swarm intelligence among distributed nodes over 5G channels. However, it has been demonstrated that deep neural networks are vulnerable to well-designed adversarial examples that are imperceptible to human eyes in computer vision tasks. Studying this vulnerability is valuable for enhancing the robustness of neural networks. Existing adversarial examples against object detection models are image-dependent, so in this paper we implement adversarial attacks against object detection models using universal perturbations. We observe the cross-task, cross-model, and cross-dataset transferability of universal perturbations. To address the problem that universal perturbations cannot be applied directly to attack object detection models, we first train a universal perturbation generator and then add the universal perturbations to target images in two ways: resizing and pile-up. We further exploit the transferability of universal perturbations to attack black-box object detection models, which reduces the time cost of generating adversarial examples. A series of experiments on the PASCAL VOC and MS COCO datasets demonstrates the feasibility of cross-task attacks and the effectiveness of our attack on two representative object detectors: a regression-based model (YOLOv3) and a proposal-based model (Faster R-CNN).
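
The following is a minimal sketch, not the authors' implementation, of the two application strategies the abstract names: resizing a fixed-size universal perturbation to the target image, or tiling ("pile-up") copies of it until the image is covered. The perturbation tensor `uap`, the epsilon bound, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def apply_uap_resize(image: torch.Tensor, uap: torch.Tensor,
                     eps: float = 8 / 255) -> torch.Tensor:
    """Resize the universal perturbation to the image size, then add it."""
    _, h, w = image.shape                      # image: (3, H, W) in [0, 1]
    delta = F.interpolate(uap.unsqueeze(0), size=(h, w),
                          mode="bilinear", align_corners=False).squeeze(0)
    delta = delta.clamp(-eps, eps)             # keep the perturbation small/imperceptible
    return (image + delta).clamp(0.0, 1.0)


def apply_uap_tile(image: torch.Tensor, uap: torch.Tensor,
                   eps: float = 8 / 255) -> torch.Tensor:
    """Tile ("pile up") copies of the perturbation to cover the image, then add it."""
    _, h, w = image.shape
    _, ph, pw = uap.shape                      # uap: (3, ph, pw), the generator's output size (assumed)
    reps_h = -(-h // ph)                       # ceiling division
    reps_w = -(-w // pw)
    delta = uap.repeat(1, reps_h, reps_w)[:, :h, :w].clamp(-eps, eps)
    return (image + delta).clamp(0.0, 1.0)


if __name__ == "__main__":
    # Placeholder inputs: a random image and a random "universal" perturbation.
    img = torch.rand(3, 416, 416)              # e.g. a YOLOv3-sized input
    uap = 0.03 * torch.randn(3, 224, 224)      # perturbation from a classification-scale generator
    adv_resized = apply_uap_resize(img, uap)   # strategy 1: resize
    adv_tiled = apply_uap_tile(img, uap)       # strategy 2: pile-up (tiling)
```

Either adversarial image can then be fed to a black-box detector such as YOLOv3 or Faster R-CNN; the abstract's cross-task transferability claim is that a perturbation trained for one task or model still degrades detection on another.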
