With the rapid development of image editing techniques, forensic analysis for detecting malicious image manipulation has become an important research topic. Current image manipulation detection and localization methods can accommodate diverse forms of tampering, but they handle all tampering types through a single, uniform regression pathway. This design overlooks the unique characteristics of copy-move tampering, which differs significantly from other tampering types, and applying a generic detection methodology indiscriminately risks confusing the regression trajectory of the deep learning model during training. To mitigate this problem, this paper introduces a novel framework with a dual-decoding branch structure: one branch is specifically designed to enhance features relevant to copy-move tampering, while the main branch detects tampered regions irrespective of the tampering type. To this end, we first introduce a contrastive augmentation module in the encoder that maximizes the feature-space distance between manipulated and pristine regions. Next, we design a parallel attention module to extract more diverse multiscale features, and a constrained shifted-window dual attention module to extract tampering noise features. In the decoder, we design a dual-decoding branch that captures both homologous and tampering features, and we employ contrastive learning to minimize the feature-space distance between homologous regions for copy-move manipulation detection. Finally, we design a category normalization loss function to balance the model's attention across categories. Extensive experiments demonstrate that the proposed approach achieves state-of-the-art performance on various benchmark datasets.
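The abstract does not specify implementation details. As a rough illustration of the contrastive idea described above (pushing manipulated and pristine region features apart in the encoder's embedding space), a minimal PyTorch-style sketch of a supervised pixel-wise contrastive loss is given below; the function name, tensor shapes, pixel subsampling, and temperature value are all assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def region_contrastive_loss(features, mask, temperature=0.1, max_pixels=1024):
    """Illustrative supervised pixel-wise contrastive loss (assumed form):
    embeddings of pixels sharing a label (manipulated or pristine) are pulled
    together, while embeddings of differently labelled pixels are pushed apart.

    features: (N, C, H, W) embedding map produced by the encoder
    mask:     (N, H, W)    binary ground-truth tampering mask (1 = manipulated)
    """
    n, c, h, w = features.shape
    feats = F.normalize(features, dim=1)                  # unit-length embeddings
    feats = feats.permute(0, 2, 3, 1).reshape(-1, c)      # (N*H*W, C)
    labels = mask.reshape(-1)

    # Subsample pixels so the pairwise similarity matrix stays small.
    idx = torch.randperm(feats.size(0))[:max_pixels]
    feats, labels = feats[idx], labels[idx]

    sim = feats @ feats.t() / temperature                 # cosine similarities
    positives = labels.unsqueeze(0) == labels.unsqueeze(1)
    positives.fill_diagonal_(False)                       # an anchor is not its own positive

    # Exclude self-similarity from the softmax denominator.
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_count = positives.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~positives, 0.0)).sum(1) / pos_count
    return loss.mean()
```

The same loss form, applied with labels that group homologous (source/target copy-move) pixels instead of manipulated/pristine pixels, could serve as a sketch of the decoder-side contrastive objective that minimizes the distance between homologous regions.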