Abstract

Rapid improvements in the capabilities of neural networks, particularly generative adversarial networks (GANs), have given rise to extremely sophisticated deepfake technologies, making it difficult to reliably recognize fake digital content. These technologies enable the creation of highly convincing synthetic media that can be exploited maliciously in an era of user-generated information and social media. Existing deepfake detection techniques are effective against early iterations of deepfakes but grow increasingly vulnerable to more sophisticated deepfakes and to adversarial attacks. In this paper, we explore a novel approach to deepfake detection: a framework that integrates adversarial training to improve the robustness and accuracy of deepfake detection models. Drawing on the state of the art in adversarial machine learning, forensic analysis, and deepfake detection, we examine how adversarial training can harden detection techniques against future threats. We craft adversarial perturbations designed specifically to deceive deepfake detection algorithms, and by training detection models on these perturbed examples we build detection systems that identify deepfakes more accurately. Our approach shows promise and opens avenues for future research in building resilience against deepfakes, with applications in content moderation, security, and combating synthetic media manipulation.
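The adversarial-training loop described above can be sketched in miniature. The following is an illustrative example only, not the paper's implementation: it uses a logistic-regression stand-in for the detection model and FGSM-style perturbations (x + ε·sign(∇ₓ loss)) as the adversarial examples; all names, data, and hyperparameters are assumptions for the sketch.

```python
import numpy as np

# Minimal sketch of adversarial training for a binary "real vs. fake"
# detector. The detector is a logistic-regression stand-in; perturbations
# follow the FGSM recipe x_adv = x + eps * sign(grad_x loss). Everything
# here (data, eps, lr, epochs) is an illustrative assumption.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb inputs in the direction that increases the detector's loss."""
    p = sigmoid(x @ w + b)
    grad_x = np.outer(p - y, w)   # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.1, epochs=200):
    """Train on clean samples augmented with their adversarial versions."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        x_adv = fgsm(x, y, w, b, eps)       # craft perturbed examples
        x_all = np.vstack([x, x_adv])       # clean + adversarial batch
        y_all = np.concatenate([y, y])
        p = sigmoid(x_all @ w + b)
        w -= lr * (x_all.T @ (p - y_all)) / len(y_all)
        b -= lr * np.mean(p - y_all)
    return w, b

# Toy stand-in data: "real" (label 0) and "fake" (label 1) feature clusters.
x = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = adversarial_train(x, y)
acc = np.mean((sigmoid(x @ w + b) > 0.5) == y)
```

In a realistic pipeline, the logistic-regression stand-in would be a deep detection network and `fgsm` would be computed by backpropagation through it, but the structure of the loop — craft perturbations against the current model, then train on clean and perturbed data together — is the same.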
