Abstract

In the digital era, the widespread use of video content has driven the rapid development of video editing technologies. However, it has also raised concerns about the authenticity and integrity of multimedia content. Video splicing forgery has emerged as a challenging and deceptive technique for creating fake video objects, potentially for malicious purposes such as deception, defamation, and fraud. The detection of video splicing forgery has therefore become critically important. Nevertheless, owing to the complexity of video data and the scarcity of relevant datasets, research on video splicing forgery detection remains relatively limited. This paper introduces a novel method for detecting video object splicing forgery, which enhances detection performance by deeply exploring inconsistent features between different source videos. We incorporate multiple feature types, including edge luminance, texture, and video quality information, and adopt a joint learning approach combining Convolutional Neural Network (CNN) and Vision Transformer (ViT) models. Experimental results demonstrate that our method excels at detecting video object splicing forgery, offering promising prospects for further advances in this field.
