Abstract

Deep learning techniques have become popular for performing camera model identification. To expose weaknesses in these methods, we propose a new anti-forensic framework that utilizes a generative adversarial network (GAN) to falsify an image's source camera model. Our proposed attack uses the generator trained in the GAN to produce an image that can fool a CNN-based camera model identification classifier. Moreover, our attack introduces only a minimal amount of distortion into the falsified image, imperceptible to the human eye. Through experiments on a large dataset, we show that the proposed attack can successfully fool a state-of-the-art camera model identification CNN classifier with 98% probability while maintaining high image quality.
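
To make the framework concrete, the following is a minimal sketch of the core idea in PyTorch: a generator learns a small perturbation that pushes a frozen, pretrained camera model identification CNN toward a chosen target camera label while a fidelity term keeps the falsified image close to the original. All module names, the residual architecture, and the loss weighting are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of a GAN-style camera-model falsification attack.
# Assumptions: images are float tensors in [0, 1]; `classifier` is a
# frozen, pretrained camera model identification CNN; class labels are
# camera model indices. None of these names come from the paper itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy image-to-image generator that learns a small residual perturbation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Add a bounded residual so the falsified image stays near the input,
        # keeping the introduced distortion visually minimal.
        return torch.clamp(x + 0.05 * self.net(x), 0.0, 1.0)

def attack_loss(classifier, generator, images, target_model, alpha=10.0):
    """Combined generator loss: fool the camera-ID CNN into predicting
    `target_model` while keeping the falsified image close to the original.
    `alpha` (assumed value) trades off fooling strength against fidelity."""
    fake = generator(images)
    logits = classifier(fake)
    # Adversarial term: drive the classifier toward the target camera model.
    fool = F.cross_entropy(logits, target_model)
    # Fidelity term: penalize pixel-level distortion of the falsified image.
    fidelity = F.mse_loss(fake, images)
    return fool + alpha * fidelity
```

In a full GAN setup the classifier (or a separate discriminator) would also be updated adversarially during training; it is kept frozen here only to keep the sketch short.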
