Abstract

Recent years have seen rapid progress in synthetic image generation and manipulation, raising serious concerns about malicious use: manipulated media can spread false information and erode trust in digital content. This paper introduces an automated and effective approach to detecting facial manipulation in videos, with a particular focus on the most recent method for producing hyper-realistic fake videos: Deepfake. Training our model on the FaceForensics++ dataset, we achieved a detection rate of more than 99% for Deepfake, Face2Face, FaceSwap, and NeuralTextures manipulations. Conventional image forensics techniques are of limited use here because compression strongly degrades the data. We therefore follow a layered approach: we first detect the subject with the help of existing facial recognition networks, then extract facial features using a CNN, and pass the resulting per-frame features through an LSTM layer that exploits the temporal sequence to capture manipulation inconsistencies between frames. Finally, we use Recycle-GAN, which internally relies on generative adversarial networks, to merge the spatial and temporal information.
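To make the layered CNN-then-LSTM stage concrete, the sketch below shows one plausible way to wire a per-frame CNN feature extractor into an LSTM over a cropped face sequence. It is a minimal illustration, not the paper's exact architecture: the ResNet-18 backbone, hidden size, and two-class output are assumptions chosen for brevity, and the Recycle-GAN fusion stage is omitted.

```python
# Minimal sketch (PyTorch assumed): per-frame CNN features feed an LSTM
# that models temporal inconsistencies across frames. Backbone choice,
# hidden size, and class count are illustrative, not the paper's values.
import torch
import torch.nn as nn
from torchvision import models

class CnnLstmDetector(nn.Module):
    def __init__(self, hidden_size=256, num_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN feature extractor
        self.feature_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep the 512-d frame features
        self.cnn = backbone
        self.lstm = nn.LSTM(self.feature_dim, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) -- a cropped face sequence per video
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)             # last hidden state summarizes the clip
        return self.classifier(h_n[-1])            # real vs. manipulated logits

# Usage example with a dummy 16-frame clip:
# logits = CnnLstmDetector()(torch.randn(2, 16, 3, 224, 224))
```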
