The rise of deepfake technology has ignited widespread societal apprehension about potential security risks and the dissemination of false information. Despite extensive research into deepfake detection, effectively discerning low-quality deepfakes while simultaneously identifying variations in their quality remains a formidable challenge. This investigation explores the dynamic field of deepfake detection, focusing specifically on video analysis targeting facial manipulations. The study introduces Celeb-DF, a substantial dataset comprising high-quality deepfake videos of celebrities that challenges prevailing detection methods. Additionally, a Quality-Agnostic Deepfake Detection (QAD) framework addresses the task of simultaneously recognizing deepfakes of diverse quality, surpassing established benchmarks across multiple datasets. The paper highlights ongoing efforts to enhance deepfake detection strategies, incorporating advanced generative models such as Stable Diffusion and achieving interpretability through prototypes by merging fine-tuned Vision Transformers with Support Vector Machines.

Keywords -- Deepfake technology, security risks, deepfake detection, Celeb-DF dataset, video analysis, Quality-Agnostic Deepfake Detection (QAD), Stable Diffusion, Vision Transformers, Support Vector Machines.
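The abstract does not give implementation details for the Vision Transformer plus Support Vector Machine pipeline it mentions, but the general idea can be sketched as follows: a pretrained (or fine-tuned) ViT backbone embeds cropped face frames, and an SVM classifies those embeddings as real or deepfake. This is a minimal sketch, not the paper's implementation; the checkpoint name, file paths, and labels below are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): embed face crops with a
# public ViT checkpoint, then train an SVM on the embeddings as the detector.
import numpy as np
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
backbone = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k").eval()

@torch.no_grad()
def embed(image_paths):
    """Return one ViT [CLS] embedding per cropped face image."""
    feats = []
    for path in image_paths:
        img = Image.open(path).convert("RGB")
        inputs = processor(images=img, return_tensors="pt")
        outputs = backbone(**inputs)
        feats.append(outputs.last_hidden_state[:, 0].squeeze(0).numpy())  # [CLS] token
    return np.stack(feats)

# Hypothetical lists of face crops extracted from videos: 1 = deepfake, 0 = real.
train_paths, train_labels = ["fake_0.png", "real_0.png"], [1, 0]
test_paths, test_labels = ["fake_1.png", "real_1.png"], [1, 0]

X_train, X_test = embed(train_paths), embed(test_paths)

# SVM head on top of the ViT features; its support vectors can be inspected as
# prototype-like reference examples for interpretability.
clf = SVC(kernel="rbf", C=1.0, probability=True).fit(X_train, train_labels)
print("accuracy:", accuracy_score(test_labels, clf.predict(X_test)))
```

In practice the embedding step would run over many frames per video and aggregate frame-level scores into a video-level decision; the two-image lists above only illustrate the interface.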