Abstract
With the growing popularity of social media and the rapid proliferation of malicious tampering methods, it has become easier than ever to mislead others and create disharmony in society by manipulating social media content. Hence, there is an urgent need to build robust manipulation detection systems. This manuscript proposes a novel end-to-end architecture, MRT-Net, that extracts discriminative “manipulation residual” (MR) and “textural” (T) features for facial manipulation detection. Most multi-branch architectures share a common flaw in their fusion stage: they combine multi-domain features in equal proportion. This is disadvantageous because each feature may not be equally important to the final prediction. MRT-Net solves this problem with an auto-adaptive weighting mechanism that selects an ideal proportion of manipulation residual and textural information, the two proving complementary to one another. Specifically, two weighting factors for the MR and T features, α1 and α2, are added as parameters of the proposed neural network and are updated automatically via backpropagation, thereby allowing MRT-Net to find the ideal mix of residual and textural information. Additionally, MRT-Net benefits from a channel attention mechanism that further boosts its performance. MRT-Net achieves excellent performance on three public benchmark datasets: the Deepfake Detection Challenge (DFDC), CelebDF, and FaceForensics++ (FF++). It achieves AUC scores of 0.9964 on DFDC, 0.9921 on CelebDF, 0.9910 on FF++ (DeepFake), 0.9974 on FF++ (Face2Face), 0.9942 on FF++ (FaceShifter), 0.9933 on FF++ (FaceSwap), and 0.9662 on FF++ (NeuralTextures). It also achieves accuracy scores of 0.9760 on DFDC, 0.9815 on CelebDF, 0.9670 on FF++ (DeepFake), 0.9767 on FF++ (Face2Face), 0.9611 on FF++ (FaceShifter), 0.9676 on FF++ (FaceSwap), and 0.9025 on FF++ (NeuralTextures). These results demonstrate MRT-Net's effectiveness, as it comfortably outperforms other state-of-the-art face manipulation detection methods.
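To make the auto-adaptive fusion idea concrete, the sketch below shows one way learnable weighting factors and channel attention could be combined in PyTorch. It is a minimal illustration, not the authors' implementation: the module names (ChannelAttention, AdaptiveFusion), the tensor shapes, the initial values of the weights, and the squeeze-and-excitation form of the attention block are all assumptions introduced here for clarity.

```python
# Minimal sketch of learnable-weight fusion with channel attention (assumed design,
# not the published MRT-Net code).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        weights = self.fc(x.mean(dim=(2, 3)))           # one weight per channel
        return x * weights.unsqueeze(-1).unsqueeze(-1)  # reweight channels


class AdaptiveFusion(nn.Module):
    """Fuse MR and T feature maps with learnable scalar weights alpha1 and alpha2."""

    def __init__(self, channels: int):
        super().__init__()
        # Registering alpha1 and alpha2 as parameters lets backpropagation
        # update them jointly with the rest of the network.
        self.alpha1 = nn.Parameter(torch.tensor(0.5))
        self.alpha2 = nn.Parameter(torch.tensor(0.5))
        self.attention = ChannelAttention(channels)

    def forward(self, mr_feat: torch.Tensor, t_feat: torch.Tensor) -> torch.Tensor:
        fused = self.alpha1 * mr_feat + self.alpha2 * t_feat
        return self.attention(fused)


# Usage: fuse two 256-channel feature maps from the MR and T branches.
fusion = AdaptiveFusion(channels=256)
mr = torch.randn(4, 256, 14, 14)
t = torch.randn(4, 256, 14, 14)
out = fusion(mr, t)  # (4, 256, 14, 14)
```

In this sketch the fused representation is simply a weighted sum of the two branch outputs followed by channel reweighting; any classifier head placed on top would receive gradients that also adjust alpha1 and alpha2, which is the behaviour the abstract attributes to MRT-Net's fusion stage.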