Protecting data from manipulation is a significant challenge at present. Digital images are among the most common forms of data representation and are used in many areas, such as social media, the military, court evidence, intelligence, security, and newspapers. Digital image forgery means adding anomalous patterns to an original image, which introduces inconsistencies in image properties. Copy-move forgery is one of the hardest types of image forgery to detect. It is performed by duplicating a part of an image and pasting it elsewhere within the same image. When the original content is not available, forgery detection techniques are employed to verify image authenticity. Methods based on deep learning (DL) have shown good performance and promising results; however, they suffer from a strong dependency on training data and on a suitable choice of hyperparameters. This manuscript presents an Enhancing Copy-Move Video Forgery Detection through Fusion-Based Transfer Learning Models with the Tasmanian Devil Optimizer (ECMVFD-FTLTDO) model. The objective of the ECMVFD-FTLTDO model is to detect and classify copy-move forgery in video content. First, the videos are converted into individual frames, and noise is removed using a modified Wiener filter (MWF). Next, the ECMVFD-FTLTDO technique employs a fusion-based transfer learning (TL) process comprising three models, ResNet50, MobileNetV3, and EfficientNetB7, to capture diverse spatial features across multiple scales, thereby enhancing the model's ability to distinguish authentic content from tampered regions. The ECMVFD-FTLTDO approach utilizes an Elman recurrent neural network (ERNN) classifier for the detection process. The Tasmanian devil optimizer (TDO) method is applied to tune the parameters of the ERNN classifier, ensuring improved convergence and performance. A wide range of simulation analyses is performed on the GRIP and VTD datasets. The performance validation of the ECMVFD-FTLTDO technique yielded superior accuracy values of 95.26% and 92.67% on the GRIP and VTD datasets, respectively, compared to existing approaches.
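
The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of the pipeline stages named in the abstract: frame extraction, Wiener-filter denoising, fusion of the three pretrained backbones, and an Elman-style recurrent classifier. The frame-sampling step, the 224x224 input size, the use of SciPy's standard Wiener filter in place of the paper's modified Wiener filter, and the Keras SimpleRNN stand-in for the ERNN are all assumptions; the Tasmanian devil optimizer tuning stage is omitted.

```python
import cv2
import numpy as np
import tensorflow as tf
from scipy.signal import wiener

IMG_SIZE = (224, 224)  # assumed common input size for all three backbones

def extract_frames(video_path, step=10):
    """Read every `step`-th frame from the video and resize it (step is an assumption)."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frames.append(cv2.resize(frame, IMG_SIZE))
        idx += 1
    cap.release()
    return np.asarray(frames, dtype=np.float32)

def denoise(frames):
    """Channel-wise Wiener filtering (stand-in for the paper's modified Wiener filter)."""
    out = np.empty_like(frames)
    for i, f in enumerate(frames):
        for c in range(3):
            out[i, :, :, c] = wiener(f[:, :, c], mysize=5)
    return out

# Three ImageNet-pretrained backbones used as frozen feature extractors,
# each paired with its own preprocessing function.
backbones = [
    (tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                    pooling="avg", input_shape=(*IMG_SIZE, 3)),
     tf.keras.applications.resnet50.preprocess_input),
    (tf.keras.applications.MobileNetV3Large(include_top=False, weights="imagenet",
                                            pooling="avg", input_shape=(*IMG_SIZE, 3)),
     tf.keras.applications.mobilenet_v3.preprocess_input),
    (tf.keras.applications.EfficientNetB7(include_top=False, weights="imagenet",
                                          pooling="avg", input_shape=(*IMG_SIZE, 3)),
     tf.keras.applications.efficientnet.preprocess_input),
]
for model, _ in backbones:
    model.trainable = False

def fused_features(frames):
    """Concatenate pooled features from all three backbones for each frame."""
    feats = [model.predict(pre(frames.copy()), verbose=0) for model, pre in backbones]
    return np.concatenate(feats, axis=-1)  # shape: (n_frames, fused_dim)

def build_classifier(fused_dim, hidden_units=64):
    """Elman-style recurrent classifier over per-frame fused features.
    SimpleRNN stands in for the ERNN; its parameters would be tuned by TDO in the full method."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, fused_dim)),
        tf.keras.layers.SimpleRNN(hidden_units),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # forged vs. authentic
    ])
```

In this sketch, per-frame feature vectors from the three backbones are simply concatenated; the resulting sequence of fused vectors for a video clip would then be fed to the recurrent classifier for a forged/authentic decision.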