Abstract

In recent years there has been a significant increase in images and videos circulating on social networks and in the media that have been edited with different techniques, including colorization. This has a negative impact on the forensic field because it is increasingly difficult to discern what is original content and what is fake. To address this problem, we propose two CNN-based models (a custom architecture and a transfer-learning-based model) that allow fast recognition of colorized images (or videos). In the experimental tests, the effect of three hyperparameters on the performance of the classifier was analyzed in terms of HTER (Half Total Error Rate). The best result was obtained with the Adam optimizer, a dropout of 0.25, and an input image size of 400 × 400 pixels. Additionally, the proposed models are compared with each other in terms of performance and inference times, and with some state-of-the-art approaches. In terms of inference time per image, the proposed custom model is 12× faster than the transfer-learning-based model; however, in terms of precision, recall, and F1-score, the transfer-learning-based model is better than the custom model. Both models generalize better than other models reported in the literature.
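The paper's implementation details are not reproduced on this page; purely as an illustration of the setup the abstract describes (a VGG-16 backbone reused via transfer learning for binary colorized-vs-original classification, 400 × 400 inputs, a dropout of 0.25, and the Adam optimizer), the following Keras sketch shows one plausible way to assemble such a classifier, together with the conventional HTER definition (the mean of the false-positive and false-negative rates). The classification-head sizes and function names are assumptions, not values taken from the paper.

```python
# Hypothetical sketch of a VGG-16 transfer-learning classifier matching the
# hyperparameters mentioned in the abstract (400x400 input, dropout 0.25, Adam).
# Head layer sizes are assumptions, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_vgg16_colorization_detector(input_size=400, dropout_rate=0.25):
    # Pre-trained VGG-16 backbone, frozen so only the new head is trained.
    backbone = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet",
        input_shape=(input_size, input_size, 3))
    backbone.trainable = False

    model = models.Sequential([
        backbone,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),   # assumed head size
        layers.Dropout(dropout_rate),           # dropout of 0.25, as in the abstract
        layers.Dense(1, activation="sigmoid"),  # colorized (1) vs. original (0)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def hter(false_positive_rate, false_negative_rate):
    # Half Total Error Rate: the mean of the two error rates (standard definition).
    return (false_positive_rate + false_negative_rate) / 2.0
```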

Highlights

  • Images and videos are among the most widely used forms of communication thanks to the evolution of mobile technologies and the appearance of smartphones and social networks such as Facebook and Instagram

  • It is estimated that in 2020 more than 1.4 billion pictures were taken [1], which could be edited for different uses such as entertainment, as in the film and advertising sectors. Tools such as Photoshop, Affinity Photo, and Paintshop allow for simple, manual image editing without a trace visible to the human eye. Another editing approach is the automatic generation of tampered data through deep learning algorithms with CNNs (Convolutional Neural Networks) [2] or GANs (Generative Adversarial Networks) [3]

  • The first part is related to the impact of the dataset, while the second and third parts focus on the impact of input image size, dropout, and optimizer

Summary

Introduction

It is estimated that in 2020 more than 1.4 billion pictures were taken [1], which could be edited for different uses such as entertainment, as in the film and advertising sectors. Tools such as Photoshop, Affinity Photo, and Paintshop allow for simple, manual image editing without a trace visible to the human eye. Among the methods of image and video editing that negatively impact the forensic field is copy/move, which consists of copying a part of the image and pasting it over the same image. In this way, a specific area of the image can be hidden. Some hand-crafted approaches, such as the Fake Colorized Image Detection (FCID-HIST and FCID-FE) methods, which are based on histograms and feature encoding, highlight the problem of generalization, i.e., they show a significant decrease in performance between the results of internal and external validation [17].
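As a simple illustration of the copy/move manipulation described above, the sketch below copies a rectangular patch from one region of an image and pastes it over another, hiding the covered area. This is only a minimal example; the coordinates, patch size, and function name are arbitrary and not taken from the paper.

```python
# Minimal sketch of a copy/move manipulation: a patch is copied from one region
# of an image and pasted over another, hiding the covered content.
# Coordinates and sizes are arbitrary examples, not values from the paper.
import numpy as np

def copy_move(image: np.ndarray, src_xy, dst_xy, size):
    """Copy a (size x size) patch from src_xy and paste it at dst_xy."""
    sx, sy = src_xy
    dx, dy = dst_xy
    tampered = image.copy()
    patch = image[sy:sy + size, sx:sx + size].copy()
    tampered[dy:dy + size, dx:dx + size] = patch
    return tampered

# Example: hide a 50x50 region at (200, 120) with texture copied from (10, 10).
img = np.random.randint(0, 256, (400, 400, 3), dtype=np.uint8)
forged = copy_move(img, src_xy=(10, 10), dst_xy=(200, 120), size=50)
```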

The Proposed Custom Model
The Proposed Transfer-Learning-Based Model
Experiments
Evaluation Metrics
Experimental Hyperparameters of the Custom Model and the VGG-16-Based Model
Dataset and Hyperparameter Selection
Impact of the Dataset
Impact of Hyperparameters in the Custom Model
Impact of the Optimizer in the VGG-16-Based Model
Method
Conclusions and Future Work