Abstract

Recent years have seen massive progress in synthetic image generation and manipulation, which raises significant concerns about malicious applications in society: manipulated media can spread false information and erode trust in digital content. This paper introduces an automated and effective approach to detecting facial manipulation in videos, with a particular focus on the latest method used to produce hyper-realistic fake videos: Deepfake. Training our model on the FaceForensics++ dataset, we achieved a successful detection rate of more than 99% on Deepfakes, Face2Face, FaceSwap, and NeuralTextures manipulations. Conventional image forensics techniques are usually of limited use here because compression strongly degrades the data. This paper therefore follows a layered approach: the subject's face is first located using existing facial recognition networks; facial features are then extracted with a CNN and passed through an LSTM layer, which exploits the temporal sequence to detect face manipulation between frames; finally, a Recycle-GAN, which internally uses generative adversarial networks, merges the spatial and temporal data.
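To make the layered CNN-plus-LSTM stage more concrete, the following is a minimal, hypothetical sketch (not the authors' code) of a per-frame CNN feature extractor feeding an LSTM over a sequence of cropped face frames, ending in a real-versus-manipulated classifier. It assumes a PyTorch setting with a ResNet-18 backbone; all class names, dimensions, and parameters are illustrative assumptions.

```python
# Illustrative sketch only: per-frame CNN features -> LSTM over frames -> classifier.
import torch
import torch.nn as nn
from torchvision import models


class CnnLstmDetector(nn.Module):
    def __init__(self, feature_dim=512, hidden_dim=256, num_classes=2):
        super().__init__()
        # ResNet-18 as the per-frame feature extractor (assumption; in practice a
        # pretrained backbone would be used). The final FC layer is replaced so the
        # backbone emits a 512-dimensional embedding per face crop.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.cnn = backbone
        # The LSTM consumes the sequence of per-frame embeddings, modelling
        # frame-to-frame inconsistencies introduced by face manipulation.
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):
        # frames: (batch, seq_len, 3, H, W) tensor of cropped face regions.
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        # Classify from the last hidden state: logits for real vs. manipulated.
        return self.classifier(h_n[-1])


# Usage example: score a batch of 2 clips, each with 8 face crops of 224x224.
model = CnnLstmDetector()
logits = model(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```

The sketch covers only the spatial (CNN) and temporal (LSTM) stages of the pipeline; the face localisation step and the Recycle-GAN fusion described in the abstract are not shown.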
