Abstract

As the misinformation crisis continues, it is creating a generation more politically divided than ever before. Among the most concerning forms of misinformation are Deepfake videos, which use Generative Adversarial Networks to replace a person's face in an existing video with someone else's. Deepfake videos can interfere with diplomatic relations, erode trust in journalism, and tamper with video evidence in court, which is why it is imperative to detect them accurately. Long Short-Term Memory models (LSTMs) are a type of Recurrent Neural Network, meaning they can retain sequential information, which is helpful for processing the frames of a video. LSTMs are also able to account for lags between frames, which makes them well suited to Deepfake video detection. One potential way to increase the accuracy of a neural network is normalization, which ensures the input data are on the same scale, so the model has to process a smaller range of values. Because the effect of normalization varies by dataset, this study used image normalization: each pixel of the Deepfake video frames was converted to an RGB value between 0 and 255 to see whether this could increase the accuracy of an LSTM model for Deepfake detection. First, a baseline LSTM model for Deepfake detection was built with the PyTorch library, achieving a classification accuracy of 88.191%. Then the first ten frames of each Deepfake video in the dataset were passed through an image normalization algorithm, which yielded an accuracy of 94.274%. Adding the normalization step thus increased the accuracy of the Deepfake detection LSTM model by 6.083 percentage points. This is a substantial improvement, and the study showed that video normalization can be very beneficial for Deepfake video detection.
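The abstract does not specify the exact normalization procedure, but the description (converting each pixel of the first ten frames to an RGB value between 0 and 255) is consistent with per-video min-max scaling. The sketch below illustrates that idea with NumPy; the function name `normalize_frames` and the frame shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def normalize_frames(frames):
    """Min-max scale pixel values into the 0-255 RGB range (a sketch,
    not the paper's exact algorithm).

    frames: array of shape (num_frames, height, width, 3), any numeric dtype.
    """
    frames = frames.astype(np.float64)
    lo, hi = frames.min(), frames.max()
    if hi == lo:
        # Constant input: map everything to 0 to avoid division by zero.
        return np.zeros_like(frames, dtype=np.uint8)
    scaled = (frames - lo) / (hi - lo) * 255.0
    return scaled.round().astype(np.uint8)

# Hypothetical video tensor; the study normalizes only the first ten frames.
video = np.random.default_rng(0).normal(size=(30, 64, 64, 3))
normalized = normalize_frames(video[:10])
```

The normalized frames can then be stacked into a sequence tensor and fed to the LSTM, so every video the model sees occupies the same pixel-value range.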
