Abstract

Deep learning algorithms have become so potent, thanks to increased computing power, that it is now relatively easy to produce human-like synthetic videos, commonly known as "deepfakes." It is easy to imagine scenarios in which these realistic face-swapped deepfakes are used to extort individuals, foment political unrest, or stage fake terrorist attacks. This paper presents a novel deep learning strategy for efficiently distinguishing AI-generated fake videos from real ones. Our technology can automatically detect replacement and reenactment deepfakes; in effect, we use artificial intelligence to combat artificial intelligence. Our system extracts frame-level features with a ResNeXt convolutional neural network, and these features are then used to train an LSTM-based recurrent neural network that classifies whether a submitted video has been manipulated, i.e., whether it is a deepfake or an authentic video. To simulate real-world conditions and improve the model's performance on real-time data, we evaluate our technique on a large, balanced, mixed dataset created by combining several publicly available datasets, including FaceForensics++ [1], the Deepfake Detection Challenge [2], and Celeb-DF [3].
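The pipeline described above (a ResNeXt feature extractor feeding an LSTM classifier) could be sketched roughly as follows. This is a minimal illustration only, assuming a torchvision ResNeXt-50 backbone, a single-layer LSTM, and a 2048-dimensional feature vector per frame; the paper's actual layer sizes, frame-sampling strategy, and training procedure are not specified here.

```python
import torch
import torch.nn as nn
from torchvision import models


class DeepfakeDetector(nn.Module):
    """Illustrative ResNeXt frame-feature extractor followed by an LSTM classifier."""

    def __init__(self, hidden_dim=2048, lstm_layers=1, num_classes=2):
        super().__init__()
        # ResNeXt-50 backbone; the final fully connected layer is dropped so the
        # network emits a 2048-dimensional feature vector per frame.
        backbone = models.resnext50_32x4d(weights=None)
        self.feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
        self.lstm = nn.LSTM(2048, hidden_dim, lstm_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):
        # frames: (batch, seq_len, 3, H, W) -- a sampled sequence of video frames
        b, t, c, h, w = frames.shape
        feats = self.feature_extractor(frames.view(b * t, c, h, w))
        feats = feats.view(b, t, -1)            # (batch, seq_len, 2048)
        out, _ = self.lstm(feats)               # temporal modelling across frames
        return self.classifier(out[:, -1, :])   # logits: authentic vs. deepfake


# Usage sketch: classify a batch of two 20-frame clips at 224x224 resolution.
model = DeepfakeDetector()
clips = torch.randn(2, 20, 3, 224, 224)
logits = model(clips)
print(logits.shape)  # torch.Size([2, 2])
```

The key design point reflected here is that the CNN captures per-frame visual artifacts of face manipulation, while the LSTM aggregates them across time so that temporal inconsistencies between frames contribute to the final real-or-fake decision.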
