Abstract

For many people with disabilities, sign language is the primary means of communication, and a growing number of researchers worldwide are therefore proposing intelligent hand gesture recognition systems. Such a system is aimed not only at those who wish to understand a sign language, but also at those who wish to communicate through gesture recognition software. In this paper, a new benchmark dataset for Kazakh fingerspelling, suitable for training deep neural networks, is introduced. The dataset contains more than 10122 gesture samples covering 42 letters of the alphabet. The alphabet has its own peculiarities, as some characters are shown in motion, which may influence sign recognition. The paper describes the study, comparison, testing, and analysis of convolutional neural network architectures: LeNet, AlexNet, ResNet, and EfficientNet (EfficientNetB7). EfficientNet is a state-of-the-art (SOTA) architecture and is the most recent of those under consideration. On this dataset, we show that the LeNet and EfficientNet networks outperform the other competing algorithms; moreover, EfficientNet can achieve state-of-the-art performance on other hand gesture datasets. The architecture and operating principles of these algorithms account for their effectiveness in sign language recognition. The CNN models are evaluated using accuracy and the penalty (confusion) matrix. Across the training epochs, LeNet and EfficientNet showed the best results, with accuracy and the loss function following similar, close trends. The results of EfficientNet were further explained with the SHapley Additive exPlanations (SHAP) framework; SHAP probes the model to detect complex relationships between features in the images, and focusing on this tool may help to further improve the accuracy of the model.
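The abstract mentions evaluating the CNN models with accuracy and a penalty (confusion) matrix. As a minimal sketch of that kind of evaluation, assuming integer class labels (the paper's own pipeline and label encoding are not shown here), the matrix can be built directly from predicted and true labels:

```python
import numpy as np

def evaluate(y_true, y_pred, num_classes):
    """Return overall accuracy and a confusion (penalty) matrix.

    Rows index the true class, columns the predicted class, so the
    off-diagonal cells count the misclassifications ("penalties")
    between pairs of signs.
    """
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    accuracy = np.trace(cm) / cm.sum()
    return accuracy, cm

# Toy example with 3 of the 42 classes: two samples of class 2
# are confused with class 0.
y_true = [0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 0, 1, 1, 0, 0, 2, 2]
acc, cm = evaluate(y_true, y_pred, num_classes=3)
print(acc)       # 0.75
print(cm[2, 0])  # 2 samples of class 2 predicted as class 0
```

Inspecting which off-diagonal cells dominate is what makes the matrix more informative than accuracy alone: visually similar fingerspelled letters show up as concentrated confusion pairs.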

Highlights

  • Hand gesture recognition occupies a prominent place in computer vision and is actively applied in practice

  • Sign language recognition is complicated by the dynamic nature of gestures, where a single symbol may be shown as a motion sequence

  • Many works have been devoted to gesture recognition, but we have found no prior work on Kazakh Sign Language recognition using modern deep learning algorithms


Introduction

Hand gesture recognition occupies a prominent place in computer vision and is actively applied in practice. Machine learning libraries are widely used to research and analyze gesture recognition algorithms, in this case for a sign language. Vision-based hand gesture recognition involves several tasks that must be solved to obtain good recognition results: localizing and scaling the hand, reducing noise, handling changes in lighting and viewing angle, differentiating between similar gestures, and working against a complex background. Recognizing hand gestures in images can be very challenging owing to the varying conditions under which images are captured. Sign language recognition is further complicated by the dynamic nature of gestures, where a single symbol may be shown as a motion sequence. Many works have been devoted to gesture recognition, but we have found no prior work on Kazakh Sign Language recognition using modern deep learning algorithms.
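One of the challenges listed above, varying lighting, is commonly mitigated before the frame ever reaches a CNN. The following is a minimal sketch of per-image standardization (an assumption about preprocessing; the paper does not state which normalization it uses), which makes two frames that differ only in global brightness map to the same input tensor:

```python
import numpy as np

def normalize_frame(frame: np.ndarray) -> np.ndarray:
    """Per-image standardization: subtract the mean and divide by the
    standard deviation, reducing sensitivity to global lighting changes
    before the frame is fed to a CNN."""
    frame = frame.astype(np.float32)
    std = frame.std()
    return (frame - frame.mean()) / (std if std > 0 else 1.0)

# Two versions of the same "gesture" under different lighting
# (the second is uniformly brighter) normalize to the same tensor.
base = np.array([[10.0, 20.0], [30.0, 40.0]])
brighter = base + 50.0
print(np.allclose(normalize_frame(base), normalize_frame(brighter)))  # True
```

This handles only uniform brightness shifts; viewpoint changes, similar gestures, and complex backgrounds require the learned features of the network itself.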
