Abstract
In recent studies on manipulated image localization, prevailing methods rely on convolutional neural networks (CNNs) trained in a supervised manner on manipulated images and their corresponding ground-truth annotations. However, these methods produce high false-alarm rates on unaltered images due to the limited availability of authentic images. In this paper, we first examine the distribution characteristics of class activation maps in image manipulation detection and the inherent patterns of tampered regions in this domain. We then introduce the novel concept of a tampering Edge-based Class Activation Map (EdgeCAM) for localizing tampering in manipulated images. Building on EdgeCAM, we develop a framework that localizes tampered regions in a weakly-supervised manner, relying solely on image-level annotations. Finally, we establish a benchmark for weakly-supervised image manipulation localization using genuine tampered images collected from real-world scenarios. Experimental results show that our method outperforms existing weakly-supervised image manipulation detection methods by a significant margin of 12.7% in average combined F1 across several generic datasets, and achieves performance comparable to supervised image manipulation localization methods.
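To make the underlying mechanism concrete, the snippet below is a minimal, hypothetical sketch of a standard class activation map (CAM) computed from a CNN's final convolutional features, followed by a Sobel-based edge response over that map. It illustrates the general CAM-plus-edge idea only; the backbone (a torchvision ResNet-18), the layer choice, and the Sobel step are assumptions for the example and are not the paper's actual EdgeCAM formulation.

```python
# Minimal sketch (not the paper's EdgeCAM): compute a standard CAM from the
# last conv block and classifier weights, then take a crude edge response
# of that map with Sobel filters. Model and edge operator are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

features = {}
def hook(_module, _inputs, output):
    features["conv"] = output            # (1, C, h, w) final conv activations
model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
logits = model(x)
cls = logits.argmax(dim=1)                # index of the predicted class

# CAM: class-specific weighted sum of the final conv feature maps.
w = model.fc.weight[cls]                  # (1, C) classifier weights
cam = torch.einsum("bc,bchw->bhw", w, features["conv"])
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Crude edge response of the CAM via Sobel gradients (illustrative only).
sobel_x = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)
sobel_y = sobel_x.transpose(2, 3)
gx = F.conv2d(cam.unsqueeze(1), sobel_x, padding=1)
gy = F.conv2d(cam.unsqueeze(1), sobel_y, padding=1)
edge_map = torch.sqrt(gx ** 2 + gy ** 2)  # highlights boundaries of the CAM
print(cam.shape, edge_map.shape)
```

In this sketch, the edge map simply traces where the activation map changes sharply; any actual weakly-supervised localization pipeline would refine such a signal further.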