Abstract

Over the past few years, freely available mobile applications built on artificial intelligence and deep learning have made it easy to produce convincing face swaps in video, known as "DeepFake" (DF) videos, which leave few traces by which their authenticity can be checked. Digitally manipulated video has long been possible through conventional visual-effects techniques, but recent advances in artificial intelligence, together with free tools for applying them, have driven a sharp rise in fabricated content. Such AI-generated media are commonly called DeepFakes (DF). Creating a DF with automated AI tools is simple; detecting one, however, is a significant challenge, and training an algorithm to spot DF reliably is not straightforward. We attempt to recognize DF using a combination of a convolutional neural network (CNN) and a recurrent neural network (RNN). The framework uses a CNN for frame-level feature extraction; the extracted frame features are then used to train the RNN, which learns to classify videos according to their temporal inconsistencies. Results are evaluated against a large number of fake videos gathered from standard datasets, and we show that a simple architecture can make the framework accurate at this task.
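The following is a minimal sketch of the CNN-plus-RNN pipeline described above: a CNN extracts a feature vector from each frame, and an LSTM consumes the resulting frame-feature sequence to classify the video as real or fake. The specific backbone (ResNet-50), hidden size, and sequence handling are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class DeepFakeDetector(nn.Module):
    """Frame-level CNN feature extractor followed by an LSTM over the
    frame sequence and a binary (real/fake) classification head.
    Backbone and hyperparameters are illustrative assumptions."""

    def __init__(self, hidden_dim=512, lstm_layers=1, num_classes=2):
        super().__init__()
        # Pretrained CNN backbone; drop the final FC layer so the network
        # yields a 2048-d feature vector per frame.
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        # RNN that models temporal inconsistencies across frames.
        self.rnn = nn.LSTM(input_size=2048, hidden_size=hidden_dim,
                           num_layers=lstm_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, channels, height, width)
        b, t, c, h, w = x.shape
        # Extract per-frame features with the CNN.
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
        # Feed the frame-feature sequence to the LSTM.
        out, _ = self.rnn(feats)
        # Classify the video from the last time step's hidden state.
        return self.classifier(out[:, -1, :])

# Example: a batch of 4 videos, 20 frames each, 224x224 RGB frames.
model = DeepFakeDetector()
logits = model(torch.randn(4, 20, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```

In this sketch the CNN runs over all frames in a batch at once, and the video-level decision is read from the LSTM's final hidden state; other pooling choices over the time dimension would also be consistent with the description in the abstract.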
