Abstract

The objective of this paper is to compare different classifiers’ recognition accuracy for the 28 Arabic alphabet letters gestured by participants as Sign Language and captured by two depth sensors. The accuracy results of three individual classifiers, (1) the support vector machine (SVM), (2) random forest (RF), and (3) k-nearest neighbour (kNN), on the original gestured dataset were compared with the accuracy results of an ensemble of the three classifiers’ outputs, as recommended by the literature. SVM produced higher overall accuracy as an individual classifier regardless of the number of observations per letter. However, for letters with fewer than 65 observations each, which formed a far smaller dataset, RF achieved higher accuracy than SVM when using the ensemble approach. Although RF produced higher accuracy for classes with limited observation data, the difference between the accuracy of RF in phase 2 (the ensemble) and SVM in phase 1 (individual classification) was negligible. The researchers conclude that such a difference does not warrant using the ensemble approach for this experiment, since it adds processing complexity without a significant increase in accuracy.

Highlights

  • Researchers in the Arab world, as well as researchers worldwide, are always investigating the use of assistive communication tools that could help the hearing-impaired in their daily lives when using their local languages and dialects

  • Although research has been done on sign language recognition systems, limited research has addressed gesture recognition of Arabic Sign Language (ArSL)

  • The research methodology of Al-Masre and Al-Nuaim for gesture recognition used only one classifier (SVM) as a supervised machine learning hand-gesturing model [13] to classify the 28 letters of the Arabic alphabet (Figure 1). In addition, to overcome the time complexity of interpreting the data for their model, the researchers used the principal component analysis (PCA) algorithm to simplify the large dataset by reducing features

Summary

INTRODUCTION

Researchers in the Arab world, as well as researchers worldwide, are always investigating the use of assistive communication tools that could help the hearing-impaired in their daily lives when using their local languages and dialects. The research methodology of Al-Masre and Al-Nuaim for gesture recognition used only one classifier (SVM) as a supervised machine learning hand-gesturing model [13] to classify the 28 letters (considered classes) of the Arabic alphabet (Figure 1). In addition, to overcome the time complexity of interpreting the data for their model, the researchers used the principal component analysis (PCA) algorithm to simplify the large dataset by reducing features. This research used SVM to classify the 28 ArSL letters as in Al-Masre and Al-Nuaim [13], and to overcome the limitation of using the PCA algorithm, the proposed model focused on including all of the features of the collected data while adding a classification step, as recommended by the literature, to produce higher recognition accuracy.
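The two-phase comparison described above (individual SVM, RF, and kNN classifiers versus a combined ensemble of their outputs) can be sketched with scikit-learn. The paper's depth-sensor dataset is not available here, so a synthetic 28-class dataset stands in for the gesture features, and the hyperparameters shown are illustrative assumptions, not the authors' settings:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in: 28 classes (one per Arabic letter), 60 features,
# keeping all features rather than reducing them with PCA.
X, y = make_classification(n_samples=2800, n_features=60, n_informative=30,
                           n_classes=28, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Phase 1: each classifier evaluated on its own.
classifiers = {
    "SVM": SVC(kernel="rbf", random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
individual = {name: accuracy_score(y_test, clf.fit(X_train, y_train).predict(X_test))
              for name, clf in classifiers.items()}

# Phase 2: combine the three classifiers' predictions by majority vote.
ensemble = VotingClassifier(list(classifiers.items()), voting="hard")
ensemble_acc = accuracy_score(y_test, ensemble.fit(X_train, y_train).predict(X_test))
print(individual, ensemble_acc)
```

Comparing `individual["SVM"]` against `ensemble_acc` mirrors the paper's question of whether the voting ensemble's extra processing buys a meaningful accuracy gain over the best single classifier.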

LITERATURE REVIEW
CLASSIFICATION ALGORITHMS
THE PROPOSED MODEL
DISCUSSION AND CONCLUSION