Abstract

In the domain of general image forgery detection, a myriad of classification solutions have been developed to distinguish "tampered" images from "pristine" ones. In this work, we develop a new method for binary image forgery detection. Our approach builds on the extensive training that state-of-the-art image classification models have undergone on the ImageNet dataset and transfers that knowledge to the image forgery detection domain. Using transfer learning and fine-tuning, we adapt five pre-trained models (EfficientNetB0, VGG16, Xception, ResNet50V2, and NASNet-Large) to the forgery detection task and train them on a diverse, evenly distributed image forgery dataset. Each model was fitted, fine-tuned, and evaluated against a common set of performance metrics. Our evaluation demonstrates the efficacy of large-scale image classification models, paired with transfer learning and fine-tuning, at detecting image forgeries. On a previously unseen dataset, the best-performing model, EfficientNetB0, achieved an accuracy of nearly 89.7%.
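
The abstract does not spell out implementation details, so the following is only a minimal sketch of the transfer-learning-plus-fine-tuning recipe it describes, using TensorFlow/Keras with EfficientNetB0 as the backbone. The framework choice, input resolution, learning rates, dropout rate, and the two-stage freeze/unfreeze schedule are illustrative assumptions, not the configuration reported in the paper.

    import tensorflow as tf

    IMG_SIZE = (224, 224)  # assumed input resolution; the paper's setting may differ

    # Load EfficientNetB0 with ImageNet weights, dropping its 1000-class head.
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,)
    )
    base.trainable = False  # stage 1: train only the new classification head

    # Attach a binary head: pristine (0) vs. tampered (1).
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = base(inputs, training=False)  # keep batch-norm layers in inference mode
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)

    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-3),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    # model.fit(train_ds, validation_data=val_ds, epochs=10)

    # Stage 2: fine-tuning. Unfreeze the backbone and continue training with a
    # much lower learning rate so the pre-trained ImageNet weights are only
    # gently adjusted to the forgery-detection data.
    base.trainable = True
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-5),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    # model.fit(train_ds, validation_data=val_ds, epochs=5)

The same pattern would apply to the other backbones named in the abstract (VGG16, Xception, ResNet50V2, NASNet-Large) by swapping the base model, bearing in mind that each expects its own input size and preprocessing.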
