Determining the script of historical manuscripts is pivotal for understanding historical narratives, providing historians with vital insights into the past. In this study, we develop an automated system for identifying the script of historical documents using a deep learning approach. Building on the CLaMM dataset, the system begins with preprocessing, employing two fundamental techniques: denoising via non-local means and binarization via Canny edge detection. These steps prepare each document for keypoint detection with the Harris corner detector, a feature-detection method. We then cluster the detected keypoints with the k-means algorithm and extract patches around the identified features. Finally, we train deep learning models on these patches, comparing two architectures: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). As no prior study has investigated the performance of vision transformers on historical manuscripts, our research fills this gap. The system undergoes a series of experiments to fine-tune its parameters for optimal performance. Our results show average accuracies of 89.2% and 91.99% for the CNN- and ViT-based frameworks, respectively, surpassing the state of the art in historical script classification and affirming the effectiveness of our automated script identification system.
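To make the described pipeline concrete, the following is a minimal sketch of the preprocessing and patch-extraction stages using OpenCV and scikit-learn. It is illustrative only: the function choices, thresholds, cluster count, and patch size (`n_clusters`, `patch_size`, Canny and Harris parameters) are assumptions, not the authors' exact settings.

```python
# Hypothetical sketch of the abstract's pipeline: non-local means denoising,
# Canny edge detection, Harris corner detection, k-means clustering of the
# keypoints, and patch extraction around the cluster centres. All parameter
# values here are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image_path, n_clusters=32, patch_size=64):
    # Load the manuscript page in grayscale.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Step 1: denoise with non-local means.
    denoised = cv2.fastNlMeansDenoising(
        gray, None, h=10, templateWindowSize=7, searchWindowSize=21)

    # Step 2: binarize via Canny edge detection.
    edges = cv2.Canny(denoised, 50, 150)

    # Step 3: detect keypoints on the edge map with the Harris corner
    # detector, keeping responses above a fraction of the maximum.
    harris = cv2.cornerHarris(np.float32(edges), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(harris > 0.01 * harris.max())
    keypoints = np.stack([xs, ys], axis=1).astype(np.float32)

    # Step 4: cluster the keypoints with k-means; each cluster centre
    # anchors one patch (assumes at least n_clusters keypoints were found).
    centres = KMeans(n_clusters=n_clusters, n_init=10).fit(keypoints).cluster_centers_

    # Step 5: crop a fixed-size patch around each centre, skipping
    # centres that fall too close to the image border.
    half = patch_size // 2
    patches = []
    for cx, cy in centres.astype(int):
        if half <= cx < gray.shape[1] - half and half <= cy < gray.shape[0] - half:
            patches.append(denoised[cy - half:cy + half, cx - half:cx + half])
    return patches  # these patches would then be fed to a CNN or ViT classifier
```

The returned patches correspond to the inputs of the final training stage, where the abstract compares CNN and ViT architectures.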