Abstract

Adversarial attacks are widely used to exploit machine learning models, including deep neural networks (DNNs), at either the training or the testing stage, causing the attacked models to make false predictions. Purely digital adversarial attacks do not transfer directly to the physical world, and attacking object detection is more difficult than attacking image classification. This paper presents a physical adversarial attack on object detection that uses 3D adversarial objects. The proposed methodology overcomes a key limitation of 2D adversarial patches, which remain effective only from a narrow range of viewpoints. We map an adversarial texture onto a mesh to create a 3D adversarial object; the objects come in various shapes and sizes and, unlike fixed adversarial patches, can be moved from one place to another. Experimental results show that our 3D adversarial objects are free from the viewpoint constraints of 2D patches and successfully attack object detectors. We use vehicle models from the ShapeNet dataset, create the 3D objects in Blender 2.93 [1], and incorporate different HDR images to build a virtual physical environment. As target DNNs, we attack Faster R-CNN and YOLO models pre-trained on the COCO dataset. Experimental results demonstrate that our proposed approach successfully fools these object detectors.
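
To illustrate the pipeline summarized above, the following is a minimal sketch, not the authors' implementation, of how an adversarial texture could be mapped onto an imported ShapeNet mesh and lit with an HDR environment using Blender 2.93's Python API (bpy); the object name and file paths are hypothetical.

import bpy

# Load the adversarial texture image (hypothetical path).
texture = bpy.data.images.load("//adversarial_texture.png")

# Build a node-based material whose base colour is the adversarial texture.
mat = bpy.data.materials.new(name="AdversarialTexture")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
tex_node = nodes.new("ShaderNodeTexImage")
tex_node.image = texture
links.new(tex_node.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])

# Assign the material to the imported vehicle mesh (hypothetical object name).
obj = bpy.data.objects["shapenet_car"]
if obj.data.materials:
    obj.data.materials[0] = mat
else:
    obj.data.materials.append(mat)

# Use an HDR image as the world environment to simulate physical lighting.
world = bpy.context.scene.world
world.use_nodes = True
env_node = world.node_tree.nodes.new("ShaderNodeTexEnvironment")
env_node.image = bpy.data.images.load("//environment.hdr")
world.node_tree.links.new(env_node.outputs["Color"],
                          world.node_tree.nodes["Background"].inputs["Color"])

Rendered views of the textured object can then be checked against a COCO-pre-trained detector. The snippet below sketches that evaluation step with torchvision's Faster R-CNN, one of the two detector families named in the abstract; the image path, score threshold, and success criterion are assumptions.

import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# COCO-pre-trained Faster R-CNN as one of the target detectors.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

image = Image.open("render.png").convert("RGB")  # a rendered Blender view
with torch.no_grad():
    pred = detector([F.to_tensor(image)])[0]

# Keep confident detections; the attack counts as successful if the vehicle
# class (e.g. COCO label 3, "car") disappears or is misclassified.
keep = pred["scores"] > 0.5
print(pred["labels"][keep], pred["scores"][keep])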
