Abstract

Identifying a writer from their handwriting is particularly challenging for a machine, even though a person's handwriting can serve as a distinguishing characteristic. Identification based on handcrafted features has shown promising results, but handling the intra-class variability among authors still requires further work. Deep learning (DL) is now used in almost all computer-vision tasks, and as a result researchers continue to propose many DL architectures and accompanying methods. In addition, feature extraction, which was usually performed with handcrafted algorithms, can now be carried out automatically by convolutional neural networks. Given these developments, it is necessary to evaluate which DL architecture is suitable for the problem at hand, namely writer identification framed as classification. This comparative study evaluated several DL architectures (VGG16, ResNet50, MobileNet, Xception, and EfficientNet), trained end-to-end, to examine their suitability for offline handwriting writer identification on the IAM and CVL databases. The architectures were compared on training and validation accuracy, showing that ResNet50 achieved the highest training accuracy at 98.86%. However, Xception performed slightly better in terms of the convergence gap for validation accuracy compared with all the other architectures, at 21.79% and 15.12% for IAM and CVL, and also showed the smallest convergence gaps between training and validation accuracy on the IAM and CVL datasets, at 19.13% and 16.49%, respectively. These findings serve as a basis for DL architecture selection and leave the overfitting problem open for future work.
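
The abstract does not include implementation details, but the comparison it describes can be illustrated with a minimal sketch: each backbone is trained end-to-end as a writer classifier and the final training/validation accuracies and their gap are reported. Everything below is an assumption for illustration only, not the paper's code: the Keras-style setup, the placeholder data directories, the image size, the number of writer classes, and the epoch count are all hypothetical.

```python
import tensorflow as tf
from tensorflow.keras import applications, layers, models

# Hypothetical settings: image size, number of writer classes, and data
# directories are placeholders, not values reported in the paper.
IMG_SIZE = (224, 224)
NUM_WRITERS = 100

BACKBONES = {
    "VGG16": applications.VGG16,
    "ResNet50": applications.ResNet50,
    "MobileNet": applications.MobileNet,
    "Xception": applications.Xception,
    "EfficientNetB0": applications.EfficientNetB0,
}

def build_model(backbone_fn):
    # End-to-end trainable backbone (no frozen layers) followed by a
    # softmax classification head over writer identities.
    base = backbone_fn(include_top=False, weights=None,
                       input_shape=IMG_SIZE + (3,), pooling="avg")
    outputs = layers.Dense(NUM_WRITERS, activation="softmax")(base.output)
    model = models.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder loaders: replace with IAM / CVL handwriting image pipelines.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/iam/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/iam/val", image_size=IMG_SIZE, batch_size=32)

for name, backbone_fn in BACKBONES.items():
    model = build_model(backbone_fn)
    history = model.fit(train_ds, validation_data=val_ds,
                        epochs=30, verbose=0)
    train_acc = history.history["accuracy"][-1]
    val_acc = history.history["val_accuracy"][-1]
    # The train/validation gap is the convergence (overfitting) indicator
    # that the study compares across architectures.
    print(f"{name}: train={train_acc:.4f}  val={val_acc:.4f}  "
          f"gap={train_acc - val_acc:.4f}")
```

Repeating the same loop with a CVL loader in place of the IAM one would yield the per-dataset accuracies and train/validation gaps of the kind summarized above.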
