Abstract

Handwritten documents are of immense significance in domains such as law, history, and administration, yet they are vulnerable to forgery, which can undermine their credibility and reliability. This paper establishes a dependable technique for identifying altered text in handwritten document images, even under high levels of noise and blur. Our study investigates 10 distinct categories of handwritten text altered through various forgery operations. The proposed approach employs the deep neural architectures VGG16 and ResNet50 as feature extractors. The architecture comprises three parts: feature extraction using the individual models, a feature fusion layer, and a classification layer. First, we optimize the training process and extract features with VGG16 and ResNet50. The feature vectors obtained from the two models are then fused in the feature fusion layer and passed to the classification layer for the classification task. Experiments on a custom dataset as well as benchmark datasets including ICPR FDC, IMEI Forged Number, and Kundu demonstrate that the proposed method outperforms existing approaches.
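The two-branch architecture described above (parallel feature extraction, fusion, then classification) can be sketched as follows. This is a minimal illustration assuming a Keras implementation with global-average-pooled backbone features and a softmax head over the 10 forgery categories; the paper's exact input size, fusion operation, and training hyperparameters are not specified in the abstract, so concatenation and the 224x224 input here are assumptions (weights are left uninitialized for brevity, whereas the paper fine-tunes the networks).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50

def build_fusion_model(input_shape=(224, 224, 3), num_classes=10):
    # Branch 1: VGG16 feature extractor -> 512-d pooled feature vector
    vgg = VGG16(include_top=False, weights=None,
                input_shape=input_shape, pooling="avg")
    # Branch 2: ResNet50 feature extractor -> 2048-d pooled feature vector
    resnet = ResNet50(include_top=False, weights=None,
                      input_shape=input_shape, pooling="avg")

    inp = layers.Input(shape=input_shape)
    f_vgg = vgg(inp)        # shape (None, 512)
    f_res = resnet(inp)     # shape (None, 2048)

    # Feature fusion layer: concatenation is assumed here -> (None, 2560)
    fused = layers.Concatenate(name="feature_fusion")([f_vgg, f_res])

    # Classification layer over the 10 altered-text categories
    out = layers.Dense(num_classes, activation="softmax",
                       name="classifier")(fused)
    return Model(inp, out)

model = build_fusion_model()
```

In practice both backbones would be initialized with pre-trained weights and fine-tuned on the forged-document data before the fused features are classified.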
