Abstract

Arabic Sign Language (ArSL) is the sign language most widely used by people with hearing and speech impairments in Arab countries. An ArSL recognition system could therefore help bridge communication between deaf people and others. Recent advances in gesture recognition based on deep learning and computer vision have proved promising. Because of the scarcity of ArSL datasets, a new ArSL dataset was created and then expanded using augmentation methods. This paper proposes an architecture that combines Transfer Learning (TL) models with Recurrent Neural Network (RNN) models for recognizing ArSL: the TL models extract spatial features from video frames, and the RNN models capture temporal dependencies across frames. The hybrid models outperformed existing architectures when evaluated on both the original and the augmented datasets. Overall, the highest recognition accuracy attained was 93.4%.

Keywords: Arabic sign language; Hand gesture; Video analysis; Transfer learning; Recurrent Neural Network
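The abstract does not name the specific TL backbone or RNN variant used, but the general hybrid pattern it describes (a pretrained CNN applied per frame, followed by an RNN over the resulting feature sequence) can be sketched as follows. This is a minimal illustrative sketch, assuming a MobileNetV2 backbone, an LSTM layer, and hypothetical clip and class sizes, not the paper's actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

# Hypothetical settings: 30-frame clips of 224x224 RGB frames, 20 gesture classes.
NUM_FRAMES, HEIGHT, WIDTH, CHANNELS = 30, 224, 224, 3
NUM_CLASSES = 20

# Pretrained CNN (transfer learning) used as a per-frame spatial feature extractor.
cnn_base = MobileNetV2(include_top=False, weights="imagenet",
                       input_shape=(HEIGHT, WIDTH, CHANNELS), pooling="avg")
cnn_base.trainable = False  # freeze pretrained weights, as is typical in TL

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, HEIGHT, WIDTH, CHANNELS)),
    # Apply the CNN to every frame to obtain a sequence of spatial feature vectors.
    layers.TimeDistributed(cnn_base),
    # The RNN models temporal dependencies across the frame sequence.
    layers.LSTM(256),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Swapping in a different pretrained backbone (e.g. a ResNet or Inception variant) or a GRU in place of the LSTM keeps the same spatial-then-temporal structure that the paper's hybrid models rely on.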
