Abstract
The ease with which deepfake videos can be created indicates an increasing need for authentication methods that classify deepfake videos. The issue is sensitive: deepfakes make it possible to show someone saying or doing something they never said or did in a video, which puts a person's dignity at risk. A further problem is the large size of deepfake video datasets, because large datasets require longer training times and high computational specifications. This paper describes an approach to deepfake video classification using a small dataset and image processing. We propose a method that applies MTCNN (Multi-task Cascaded Convolutional Networks) to extract face data from video frames, image processing in the form of Gaussian filters and Local Binary Patterns (LBP), and the Xception model for deepfake video classification, with ResNet-50 as a comparison. We use a dataset of 2000 frames drawn from the full Celeb-DF(V2) dataset. The results show that the proposed method with the Xception model performs better than the ResNet-50 model, achieving an AUC of 0.87 and an accuracy of 0.79.
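A minimal sketch of the preprocessing pipeline described above (MTCNN face extraction, Gaussian filtering, LBP) feeding an Xception-based classifier. The blur kernel size, the LBP parameters (P, R), and the stacking of the LBP map into three channels are illustrative assumptions, not values taken from the paper.

```python
# Sketch: MTCNN face crop -> Gaussian filter -> LBP -> Xception classifier.
# Hyperparameters below are assumptions for illustration only.
import cv2
import numpy as np
from mtcnn import MTCNN
from skimage.feature import local_binary_pattern
from tensorflow.keras.applications import Xception
from tensorflow.keras import layers, models

detector = MTCNN()

def preprocess_frame(frame_bgr, size=(299, 299)):
    """Detect the face with MTCNN, apply a Gaussian filter, and compute an LBP map."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    faces = detector.detect_faces(rgb)
    if not faces:
        return None  # skip frames with no detected face
    x, y, w, h = faces[0]["box"]
    x, y = max(x, 0), max(y, 0)
    face = cv2.resize(rgb[y:y + h, x:x + w], size)

    # Gaussian filter to suppress high-frequency noise (kernel size assumed).
    face = cv2.GaussianBlur(face, (5, 5), 0)

    # Local Binary Pattern on the grayscale face (P and R assumed).
    gray = cv2.cvtColor(face, cv2.COLOR_RGB2GRAY)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp = (lbp / lbp.max()).astype(np.float32)

    # Stack the LBP map into three channels to match Xception's input shape.
    return np.stack([lbp, lbp, lbp], axis=-1)

def build_classifier():
    """Binary real/fake classifier on top of an ImageNet-pretrained Xception."""
    base = Xception(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=(299, 299, 3))
    out = layers.Dense(1, activation="sigmoid")(base.output)
    return models.Model(base.input, out)
```

The same preprocessing could be paired with a ResNet-50 backbone for the comparison mentioned in the abstract by swapping the base model.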