Abstract

In this study, we investigate the effectiveness of ResNet, a deep neural network architecture, in a deep learning approach to printed document identification. ResNet is known for its ability to mitigate the vanishing gradient problem and learn highly representative features. Multiple ResNet variants (ResNet50, ResNet101, and ResNet152) serve as the backbone of our classification model and are trained on a comprehensive dataset of microscopic images of printed patterns from various source printers. We also incorporate Mix-up augmentation, a technique that generates virtual training samples by linearly interpolating pairs of images and their labels, to further improve the model's performance and generalization. The experimental results show that the ResNet101 and ResNet152 variants outperform the others in distinguishing printer sources from microscopic printed patterns. We also developed a mobile app to test the practical feasibility of our findings. In conclusion, this study lays the groundwork for a pre-trained model with accurate identification performance that can be deployed on mobile devices to identify the printer source of a document.
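The Mix-up augmentation mentioned above can be sketched as follows. This is a minimal illustration of the general technique (Zhang et al.), not the paper's actual training code; the function name, batch shapes, and the `alpha=0.2` default are assumptions for the example.

```python
import numpy as np

def mixup_batch(images, labels, alpha=0.2, rng=None):
    """Mix-up: interpolate random pairs of images and their one-hot labels
    with a coefficient drawn from a Beta(alpha, alpha) distribution.
    `images` has shape (N, H, W, C); `labels` has shape (N, num_classes)."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)            # mixing coefficient in [0, 1]
    perm = rng.permutation(len(images))     # random pairing within the batch
    mixed_images = lam * images + (1 - lam) * images[perm]
    mixed_labels = lam * labels + (1 - lam) * labels[perm]
    return mixed_images, mixed_labels
```

Because the same coefficient mixes both images and labels, each virtual sample carries a soft label reflecting the blend, which regularizes the classifier toward smoother decision boundaries between printer sources.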
