Abstract

Deep learning (DL) has grown significantly in the field of image forensics. Considerable research has been devoted to developing DL-based image manipulation detection techniques. At the same time, researchers are challenging the robustness of these DL-based image forensic techniques by developing efficient anti-forensic schemes. This paper reveals the role of adversarial attacks in image anti-forensics, with better human imperceptibility, against recent general-purpose image forensic techniques. We propose an image anti-forensic framework based on recent adversarial attacks, i.e., the Fast Gradient Sign Method (FGSM), Carlini and Wagner (C&W), and Projected Gradient Descent (PGD). First, we train recent image forensic models on the BOSSBase dataset. Then, we generate adversarial noise using the gradients of these image forensic models for each adversarial attack. The obtained noise is added to the input image, yielding the adversarial image for that attack. These adversarial images are generated from the BOSSBase dataset and tested on the recent image forensic models. The experimental results show that the performance of the recent forensic models decreases sharply, in the range of 50–75%, against the different adversarial attacks, i.e., FGSM, C&W, and PGD. Furthermore, the high human imperceptibility of the generated adversarial images is confirmed by their PSNR values.

Keywords: Anti-forensics, Adversarial attack, Image manipulation detection
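The core of each attack described above is the same: compute the gradient of the forensic model's loss with respect to the input image, convert it into a bounded perturbation, and add it back to the image. The sketch below illustrates the single-step FGSM case in PyTorch; the function name `fgsm_attack`, the use of cross-entropy loss, and the epsilon value are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal FGSM sketch (illustrative; model, loss choice, and epsilon are assumptions).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial image: input plus epsilon-scaled sign of the
    loss gradient w.r.t. the input (Fast Gradient Sign Method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss of the forensic model
    loss.backward()                               # gradient w.r.t. the input image
    noise = epsilon * image.grad.sign()           # adversarial noise
    return (image + noise).clamp(0, 1).detach()   # keep pixel values in range
```

C&W and PGD replace this single gradient step with an optimization loop or an iterative projected-gradient loop, respectively, but they rely on the same model gradients.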
