Abstract

Handwritten digit recognition has remained a topic of interest for computer vision researchers. Its origins predate modern computing, and it remains a crucial component of the digital transformation underway in institutions across numerous fields. With the rise of machine learning models, choosing a satisfactory and well-suited algorithm for this multi-class (0-9) classification problem has become challenging. This paper compares seven machine learning algorithms on their performance in recognizing handwritten digits across two datasets. The k-Nearest Neighbors (kNN), Support Vector Machine (SVM), Logistic Regression, Neural Network, Random Forest (RF), Naive Bayes, and Decision Tree models are evaluated on the Area Under the Curve (AUC), accuracy (ACC), F1-score (F1), precision (PREC), and recall (REC). The widely used Modified National Institute of Standards and Technology (MNIST) dataset and the Handwritten Digit Classification (HDC) dataset provide the images on which this research is conducted. The results confirm that the Neural Network model is a strong classifier for this problem; however, it yields results similar to those of other machine learning classifiers in several cases. Therefore, this paper does not prescribe a single best classifier for the handwritten digit recognition problem but rather explains the reasons behind each model's performance.
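A comparison of this kind can be sketched with scikit-learn. The snippet below is an illustrative sketch, not the paper's exact experimental setup: it trains several of the surveyed classifier families on scikit-learn's small built-in 8x8 digits dataset (a fast stand-in for MNIST/HDC, assumed here purely for self-containment) and reports the same accuracy, precision, recall, and F1 metrics the paper evaluates.

```python
# Hedged sketch: compare several classifier families on a small digit dataset.
# The dataset, hyperparameters, and model subset are illustrative assumptions,
# not the paper's actual configuration.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# 8x8 grayscale digits (0-9), flattened to 64 features per image.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

models = {
    "kNN": KNeighborsClassifier(n_neighbors=3),
    "SVM": SVC(),
    "LogReg": LogisticRegression(max_iter=5000),
    "RF": RandomForestClassifier(random_state=0),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    results[name] = {
        "ACC": accuracy_score(y_test, pred),
        # Macro-averaging weights all ten digit classes equally.
        "F1": f1_score(y_test, pred, average="macro"),
        "PREC": precision_score(y_test, pred, average="macro"),
        "REC": recall_score(y_test, pred, average="macro"),
    }

for name, metrics in results.items():
    print(name, {k: round(v, 3) for k, v in metrics.items()})
```

On a simple dataset like this, several classifiers score similarly, which mirrors the paper's observation that no single model is an absolute winner.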
