Abstract

Creating deepfakes has rapidly become easier and more accessible due to advances in hardware and computing. The harmful nature of deepfakes demands immediate action to improve the detection of such doctored videos. In this work, we build a deep hybrid neural network model to detect deepfake videos. Using facial landmark detection, we extract data pertaining to various facial attributes from the videos. This data is passed to a multilayer perceptron to learn differences between real and deepfake videos. Simultaneously, we use a convolutional neural network to extract features and train on the videos. We combine these two models to build a multi-input deepfake detector. A subset of the Deepfake Detection Challenge Dataset along with the Dessa Dataset is used to train the model. The proposed model provides good classification results with an accuracy of 84% and an AUC score of 0.87.
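To make the multi-input design concrete, here is a minimal NumPy sketch of the fusion idea the abstract describes: one branch (an MLP) consumes flattened facial-landmark features while the other stands in for the CNN's frame embedding, and the two outputs are concatenated before a final sigmoid classifier. All layer sizes, the 68-landmark input, and the random weights are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Sketch of a two-branch (multi-input) detector head.
# Assumed sizes: 68 (x, y) landmarks -> 136 features; 64-d CNN embedding.
rng = np.random.default_rng(0)

def dense(x, w, b, act):
    # One fully connected layer: affine transform followed by activation.
    return act(x @ w + b)

relu = lambda z: np.maximum(z, 0.0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Branch 1 (MLP): facial-landmark features from one video frame.
landmarks = rng.normal(size=(1, 136))
w1, b1 = rng.normal(size=(136, 32)) * 0.1, np.zeros(32)
mlp_out = dense(landmarks, w1, b1, relu)                 # shape (1, 32)

# Branch 2 (CNN stand-in): an embedding as if produced by conv layers.
cnn_embedding = rng.normal(size=(1, 64))

# Fusion: concatenate both branches, then a sigmoid classification head.
fused = np.concatenate([mlp_out, cnn_embedding], axis=1)  # shape (1, 96)
w2, b2 = rng.normal(size=(96, 1)) * 0.1, np.zeros(1)
score = dense(fused, w2, b2, sigmoid)                     # P(deepfake)
```

In practice the two branches would be trained jointly end-to-end; this sketch only shows how the landmark and image pathways meet at the fusion layer.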
