Abstract
Video surveillance object forgery has recently emerged as a new challenge for forensics. This type of forgery combines the characteristics of the popular video copy-move and splicing forgeries, defeating most existing video forgery detection schemes. In response to this new challenge, this paper proposes a Video Surveillance Object Forgery Detection (VSOFD) method comprising three components: (i) The method presents a specially combined extraction technique that incorporates Temporal-Spatial-Frequent (TSF) perspectives for TSF feature extraction. The TSF features effectively represent video information while reducing feature dimensionality, improving computational efficiency. (ii) The method introduces a universal, extensible attention-based Convolutional Neural Network (CNN) baseline for feature processing. This architecture is compatible with various series and parallel feed-forward CNN structures, adopting them as processing backbones; it therefore benefits from various state-of-the-art structures and can address each independent TSF feature. (iii) The method adopts an encoder-attention-decoder RNN framework for feature classification. By incorporating temporal characteristics, the framework identifies correlations between adjacent frames to classify forged frames more accurately. Finally, experimental results show that the proposed network achieves an F1 score of 94.69 %, an improvement of at least 5–12 % over existing State-Of-The-Art (SOTA) VSOFD schemes and other video forensics methods.
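To make the three-stage pipeline concrete, the following is a minimal, illustrative sketch in PyTorch. All module names, feature shapes, and hyperparameters (AttentionCNN, feat_dim=128, the GRU encoder/decoder, the channel-attention gate) are assumptions for exposition, not the paper's exact architecture; the per-frame input is treated as an already-extracted TSF feature map.

import torch
import torch.nn as nn

class AttentionCNN(nn.Module):
    """(ii) Attention-based CNN baseline; the conv backbone is swappable."""
    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        # Any series/parallel feed-forward CNN could serve as this backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Simple channel-attention gate over the pooled features.
        self.attn = nn.Sequential(nn.Linear(64, 64), nn.Sigmoid())
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):                    # x: (B, C, H, W) per-frame TSF map
        f = self.backbone(x).flatten(1)      # (B, 64)
        return self.proj(f * self.attn(f))   # (B, feat_dim)

class EncoderAttentionDecoder(nn.Module):
    """(iii) Encoder-attention-decoder RNN over the frame sequence."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)     # per-frame: pristine vs. forged

    def forward(self, seq):                  # seq: (B, T, feat_dim)
        enc, _ = self.encoder(seq)
        ctx, _ = self.attn(enc, enc, enc)    # correlate adjacent frames
        dec, _ = self.decoder(ctx)
        return self.head(dec)                # (B, T, 2) frame-level logits

# Usage on a dummy 16-frame clip of 64x64 TSF feature maps.
frames = torch.randn(2, 16, 3, 64, 64)
cnn, rnn = AttentionCNN(), EncoderAttentionDecoder()
feats = torch.stack([cnn(frames[:, t]) for t in range(16)], dim=1)
logits = rnn(feats)
print(logits.shape)                          # torch.Size([2, 16, 2])

The design mirrors the abstract's division of labor: per-frame features are produced independently by the attention-gated CNN, while the sequence model handles the temporal correlations that distinguish forged frames from their neighbors.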