Unmanned systems play a pivotal role in military surveillance, critical infrastructure protection, law enforcement, search and rescue operations, and border security. Video forgery detection is integral to multimedia security, where the task is to precisely identify modified segments within video sequences. Current approaches often rely on manual feature selection and on models tailored to specific tampering types, such as copy-move or splicing, while the general representational power of deep learning models and the fusion of multiple forensic characteristics remain underexplored. This research uses a convolutional neural network (CNN) to identify copy-move video forgeries. Copy-move forgery is a type of video tampering in which a portion of the video is copied and pasted elsewhere in the same video to conceal an important detail. The proposed method divides the video into individual frames, extracts features from each frame with a pretrained CNN, and uses these features to train a second CNN that classifies each frame as authentic or forged. The method detects copy-move video forgery with high accuracy and outperforms existing approaches in both accuracy and computational cost. On the SULFA, GRIP, and VTD datasets it achieved accuracies of 85.42%, 86.16%, and 81.87%, with the shortest recorded processing times of 9.6 s, 11.4 s, and 13.7 s, respectively. Consequently, practitioners can employ the proposed method as a machine-learning tool for real-time detection of forged videos.
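The following is a minimal sketch of the frame-level pipeline the abstract describes (frame extraction, feature extraction with a pretrained CNN, and a frame classifier). The choice of backbone (ResNet-18), the 224x224 input size, and the classifier head are assumptions for illustration only; the abstract does not specify these details.

```python
# Sketch of the described pipeline; backbone, frame size, and classifier head
# are assumptions, not the paper's reported configuration.
import cv2                     # OpenCV, used here for frame extraction
import torch
import torch.nn as nn
from torchvision import models, transforms

# 1. Split the video into individual frames.
def extract_frames(video_path):
    frames, cap = [], cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

# 2. Extract per-frame features with a pretrained CNN (ResNet-18 assumed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # keep the 512-d pooled features
backbone.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def frame_features(frames):
    batch = torch.stack([preprocess(f) for f in frames])
    return backbone(batch)           # shape: (num_frames, 512)

# 3. A small classifier head (hypothetical architecture) trained on these
#    features to label each frame as authentic (0) or forged (1).
classifier = nn.Sequential(
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 2),
)
```

In this sketch the pretrained backbone is frozen and only the small head is trained, which keeps per-frame inference cheap and is consistent with the abstract's emphasis on low computational cost; the actual training setup may differ.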