Abstract

Deaf and mute Muslims often cannot reach advanced levels of education because of the barriers their disability places on educational attainment. As a result, they are unable to learn, recite, and understand the meanings and interpretations of the Holy Qur'an as easily as hearing people, which also prevents them from performing Islamic rituals, such as prayer, that require learning and reading the Holy Qur'an. In this paper, we propose a new model for Qur'anic sign language recognition based on convolutional neural networks, built through data preparation, preprocessing, feature extraction, and classification stages. The proposed model recognizes Arabic sign language by classifying the hand gestures that correspond to the dashed Qur'anic letters, in order to help deaf and mute people learn their Islamic rituals. The experiments were conducted on the subset of the large Arabic sign language dataset ArSL2018 that represents the 14 dashed letters of the Holy Qur'an; this subset contains 24,137 images. The experimental results demonstrate that the proposed model outperforms existing models.
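
To make the pipeline concrete, the following is a minimal sketch of a CNN classifier for the 14 dashed-letter classes. The abstract does not specify the architecture, so the layer sizes, activations, and optimizer here are illustrative assumptions, not the authors' exact model; the 64x64 grayscale input shape is assumed to match the image format distributed with ArSL2018.

```python
# Illustrative sketch only: a small CNN for 14-class hand-gesture
# classification, assuming 64x64 grayscale inputs (ArSL2018 format).
# Layer choices and hyperparameters are assumptions, not the paper's.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 14  # one class per dashed Qur'anic letter

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Rescaling(1.0 / 255),              # preprocessing: normalize pixel values
    layers.Conv2D(32, 3, activation="relu"),  # feature extraction
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                      # regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),  # classification
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Given integer-labeled image tensors, such a model would be trained with a call like model.fit(train_images, train_labels, epochs=10, validation_split=0.2); the actual training regime used in the paper is not described in the abstract.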
