Abstract

Copy-move is a popular image falsification in which a semantically coherent part of the image, the source area, is copied and pasted at another position within the same image as the so-called target area. Most existing copy-move detectors search for matching areas and thus identify the source and target zones indistinguishably, although only the target actually represents a tampered area. To the best of our knowledge, at the time of writing only one published method, BusterNet, is capable of source and target disambiguation, using a specifically designed deep neural network. Unlike the deep-learning-based BusterNet, we propose a source and target disentangling approach based on a local statistical model of image patches. Our method acts as a second-stage detector after a first stage of copy-move detection of duplicated areas. The intuition is as follows: even if no manipulation (e.g., scaling or rotation) is applied to the target area, its boundaries should expose a statistical deviation from both the pristine area and the source area; furthermore, if the target area is manipulated, the deviation should appear not only at the boundaries but over the full zone. Our method relies on a Gaussian Mixture Model to describe the likelihood of image patches. Likelihoods are then compared between the pristine region and the candidate source/target areas identified by the first-stage detector. Experiments and comparisons demonstrate the effectiveness of the proposed method.
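The core idea of the second stage can be illustrated with a toy sketch: fit a Gaussian Mixture Model on patches from the pristine region, then compare the mean patch log-likelihood of the two candidate areas, labeling as target the one that fits the pristine model worse. This is not the authors' actual pipeline (the patch size, number of components, and synthetic data below are illustrative assumptions), only a minimal demonstration of the likelihood-comparison principle using scikit-learn.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def extract_patches(region, patch_size=4):
    """Flatten non-overlapping patches of a 2-D region into feature vectors."""
    h, w = region.shape
    ph, pw = h // patch_size * patch_size, w // patch_size * patch_size
    return (region[:ph, :pw]
            .reshape(ph // patch_size, patch_size, pw // patch_size, patch_size)
            .swapaxes(1, 2)
            .reshape(-1, patch_size * patch_size))


def score_region(gmm, region):
    """Mean log-likelihood of a region's patches under the pristine model."""
    return gmm.score(extract_patches(region))


# Toy data: the "target" deviates statistically from the pristine texture,
# as a pasted-and-manipulated area would.
rng = np.random.default_rng(0)
pristine = rng.normal(0.0, 1.0, (64, 64))
source = rng.normal(0.0, 1.0, (32, 32))   # same statistics as pristine
target = rng.normal(0.0, 3.0, (32, 32))   # deviating statistics

gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(extract_patches(pristine))

s_src = score_region(gmm, source)
s_tgt = score_region(gmm, target)
# The candidate whose patches fit the pristine model worse is labeled target.
label = "target" if s_tgt < s_src else "source"
```

In the actual method the candidate areas come from a first-stage copy-move detector rather than being given, and the deviation is examined at the area boundaries as well as over the full zone.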
