Abstract

Deep models are widely deployed, yet they have been shown to harbor serious security risks. Adversarial attacks can bypass traditional defenses: by modifying the input data, an attacker can fool a deep model with changes that are imperceptible to humans. Existing adversarial example generation methods mainly attack the whole image; their iterative optimization direction is easy to predict, and the attacks lack flexibility. For more complex scenarios, this paper proposes an edge-restricted adversarial example generation algorithm (Re-AEG) based on semantic segmentation. The algorithm can attack one or more specific objects in an image so that a detector fails to detect them. First, the algorithm automatically locates the target objects according to the application requirements. A semantic segmentation algorithm then separates each attacked object and generates a mask matrix for it. The proposed algorithm attacks only the objects within the masked region, converges quickly, and successfully deceives deep detection models. Because it hides only selected sensitive objects rather than invalidating the detection model for the entire image and triggering reported errors, it offers higher concealment than previous adversarial example generation algorithms. Comparative experiments on the ImageNet and COCO 2017 datasets show an attack success rate higher than 92%.
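The core idea described above, restricting an adversarial perturbation to a segmentation mask so that only the target object is altered, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `masked_perturbation` and the sign-gradient update (FGSM-style) are assumptions for the sketch, and the mask is presumed to come from a separate semantic segmentation model.

```python
import numpy as np

def masked_perturbation(image, grad, mask, epsilon=0.03):
    """One sign-gradient attack step applied only inside the masked region.

    image: H x W x C float array in [0, 1]
    grad:  H x W x C gradient of the detection loss w.r.t. the image
    mask:  H x W binary array from a semantic segmentation model
           (1 = pixel belongs to the object to be hidden)
    """
    # Broadcast the 2-D mask over the channel axis so the perturbation
    # is exactly zero everywhere outside the target object.
    step = epsilon * np.sign(grad) * mask[..., None]
    # Keep the adversarial image in the valid pixel range.
    return np.clip(image + step, 0.0, 1.0)
```

In an iterative attack, this step would be repeated with fresh gradients until the detector no longer reports the object; pixels outside the mask remain untouched throughout, which is what gives the region-restricted attack its concealment.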
