Facial retouching in supporting documents can undermine the credibility and authenticity of the information presented. This paper presents a comprehensive investigation into the classification of retouched face images using a fine-tuned pre-trained VGG16 model. We explore the impact of different train-test split strategies on model performance and evaluate the effectiveness of two distinct optimizers. The proposed fine-tuned VGG16 model with ImageNet weights achieves a training accuracy of 99.34% and a validation accuracy of 97.91% over 30 epochs on the ND-IIITD retouched faces dataset. The VGG16_Adam model gives a maximum classification accuracy of 96.34% for retouched faces and an overall accuracy of 98.08%. The experimental results show that the 50%-25% train-test split outperforms the other split ratios evaluated in the paper. The work also demonstrates that the transfer learning approach reduces computational complexity and training time, with a maximum training duration of 39.34 minutes for the proposed model.
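As a rough illustration of the approach described above, the sketch below shows how a VGG16 backbone with ImageNet weights might be fine-tuned for an original-vs-retouched classification task using transfer learning. The framework (Keras/TensorFlow), input size, classifier head, and learning rate are assumptions for illustration only and are not taken from the paper.

```python
# Minimal sketch (assumed Keras/TensorFlow; the paper does not specify the framework).
# Loads VGG16 with ImageNet weights, freezes the convolutional base, and adds a
# binary classification head for original vs. retouched faces.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Pre-trained convolutional base; the original ImageNet classifier head is removed.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pre-trained layers (transfer learning)

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # illustrative head size, not from the paper
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # original (0) vs. retouched (1)
])

# Adam is one of the two optimizers compared in the paper (the VGG16_Adam model).
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Training for the reported 30 epochs, given prepared train/validation datasets:
# model.fit(train_ds, validation_data=val_ds, epochs=30)
```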