Abstract

Deep learning models perform well in image classification, object detection, face recognition, and other tasks, but they are prone to misjudgment when facing adversarial attacks. Research on adversarial sample generation methods can therefore improve the security of deep learning models. Compared with algorithms based on spatial transforms, existing adversarial sample generation methods based on pixel-value perturbation of the original image can effectively reduce the construction time of adversarial samples, but the generated samples differ noticeably from the original image in perception and are easily detected by the human eye. The method proposed in this paper aims to reduce attack time while ensuring both the attack success rate and the visual similarity between the adversarial sample and the original image. Simulation results show that, compared with a traditional adversarial sample generation method based on spatial transforms, the proposed method reduces generation time by about 30% while effectively preserving the similarity between the adversarial samples and the original image under human observation.
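The pixel-value perturbation family of attacks the abstract contrasts against can be sketched in an FGSM-like form: step each pixel in the sign of the loss gradient, then clip back to the valid range. The paper's own algorithm is not given in the abstract, so the toy linear "model", weights, and epsilon below are purely illustrative assumptions, not the authors' method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, y, eps=0.05):
    """Sketch of a pixel-value perturbation attack (FGSM-style).

    For a toy logistic model p = sigmoid(w @ x), the gradient of the
    logistic loss with respect to the input x is (p - y) * w. The attack
    moves each pixel by eps in the sign of that gradient, then clips the
    result back to the valid pixel range [0, 1].
    """
    p = sigmoid(w @ x)
    grad = (p - y) * w                 # d(loss)/dx for logistic loss
    x_adv = x + eps * np.sign(grad)    # uniform per-pixel step
    return np.clip(x_adv, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random(64)                     # a toy 8x8 "image", flattened
w = rng.standard_normal(64)            # illustrative model weights
x_adv = fgsm_perturb(x, w, y=1.0, eps=0.05)
```

Because every pixel moves by up to `eps`, such perturbations are fast to compute but, at larger `eps`, become perceptible, which is exactly the trade-off between pixel-perturbation and spatial-transform attacks that the abstract describes.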
