Abstract

Deepfake technology is easily abused because of its low technical threshold, which may pose serious risks to social security. As GAN-based synthesis grows stronger, many methods struggle to classify fake content effectively. However, although the fake content generated by GANs can deceive the human eye, it ignores the biological signals hidden in the face video. In this paper, we propose a novel video forensics method based on multidimensional biological signals, which extracts the differences in biological signals between real and fake videos along three dimensions. The experimental results show that our method achieves 98% accuracy on the main public datasets. Compared with other techniques, the proposed method extracts information only from the video itself and is not limited to a specific generation method, so it is unaffected by the synthesis method and has good adaptability.

Highlights

  • With the rapid advancement of computer vision and digital content processing technology, face tampering is no longer limited to pictures: deep learning techniques can generate human faces in videos that closely resemble natural face videos captured with digital cameras and are difficult to distinguish with the naked eye. A recent study by Korshunov [1] shows that fake videos can deceive face recognition systems and raise serious security risks, such as fake news

  • If this technology is abused by criminals, it will cause serious crises; it could even be used to forge speeches by world leaders, seriously endangering political security. Therefore, the forensics of deepfake videos is of great significance

  • We propose a deepfake video forensics method based on multidimensional biological signals

Introduction

With the rapid advancement of computer vision and digital content processing technology, face tampering is no longer limited to pictures. Deep learning techniques (e.g., deepfake) can generate human faces in videos that closely resemble natural face videos captured with digital cameras and are difficult to distinguish with the naked eye. A recent study by Korshunov [1] shows that fake videos can deceive face recognition systems and raise serious security risks, such as fake news.

Deepfake technology is a product of scientific progress and the rapid development of artificial intelligence, and it has broad application prospects. It is used in entertainment industries such as film, where it can save time and labor costs. However, if the technology is abused by criminals, it will cause serious crises; it could even be used to forge speeches by world leaders, seriously endangering political security. Forensic methods for deepfake video are mainly based on intraframe or interframe information, analyzing the differences between real and fake videos. Recent work shows that heart rate signals can be used to effectively distinguish between real and fake videos [2, 3]: although GANs can generate fake content that deceives the human eye, the synthesis destroys the biological signals of the real video, such as the heart rate signal.
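The paper's own extraction pipeline is not shown on this page. Purely as an illustration of the underlying idea, the following minimal sketch recovers a crude heart-rate (rPPG) trace from a face video by averaging the green channel over the detected face region and reading off the dominant frequency in the physiological band. The function name extract_rppg_signal and its parameters are hypothetical; the sketch assumes OpenCV (cv2) and NumPy and is not the authors' method.

```python
import numpy as np
import cv2  # pip install opencv-python

def extract_rppg_signal(video_path, fps=30.0):
    """Recover a crude rPPG (blood-volume-pulse) trace from a face video.

    Uses OpenCV's bundled Haar cascade to locate the face in each frame
    and averages the green channel over the face region, since the green
    channel carries the strongest pulse signal.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    trace = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue  # skip frames where no face is found
        x, y, w, h = faces[0]
        roi = frame[y:y + h, x:x + w]
        trace.append(roi[:, :, 1].mean())  # green-channel mean (BGR order)
    cap.release()

    signal = np.asarray(trace, dtype=np.float64)
    signal -= signal.mean()  # remove the DC component

    # Estimate heart rate from the dominant frequency in the
    # physiologically plausible band (0.7-4 Hz, i.e. 42-240 bpm).
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
    return signal, bpm
```

On real video this raw trace is noisy; published rPPG-based detectors typically add detrending, bandpass filtering, and chrominance combinations of all three color channels before spectral analysis. The forensic intuition is that a GAN-synthesized face yields a trace with no consistent pulse peak in this band, while a genuine face does.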
