Abstract

Deaf and mute Muslims face educational barriers: they cannot read, recite, or comprehend the Holy Qur'an, and therefore cannot practise Islamic ceremonies. This study proposes a CNN-based methodology for Qur'anic sign language recognition. First, images are used to train the model on both dynamic and static gesture recognition. Second, image preprocessing diversifies the dataset. Finally, CNN-based deep learning models extract and classify features. To help teach Islamic ceremonies to the deaf and mute, the programme recognises the Arabic sign language hand gestures that correspond to the dashed letters of the Qur'an. The experiments used only the 24,137 images covering the Holy Qur'an's 14 dashed letters, drawn from ArSL2018, a large Arabic sign language dataset. The proposed model achieves 98.31% training and 97.67% testing accuracy; applying SMOTE yields 98.05% and 97.13%, RMO yields 98.37% and 97.36%, and RMU yields 98.66% and 97.52%, respectively.
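To make the pipeline concrete, below is a minimal sketch of a CNN classifier for the 14 dashed-letter classes with SMOTE rebalancing of the training set. It assumes TensorFlow/Keras and imbalanced-learn; the input shape, layer sizes, hyperparameters, and the helper names `build_cnn` and `rebalance_with_smote` are illustrative assumptions, not the paper's reported architecture.

```python
# Illustrative sketch only: a small CNN for 14-class sign-letter images,
# plus SMOTE oversampling of the training set. Shapes and layer sizes
# are assumptions, not the architecture reported in the paper.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models

NUM_CLASSES = 14          # the 14 dashed Qur'anic letters
IMG_SHAPE = (64, 64, 1)   # assumed grayscale input size

def build_cnn():
    model = models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def rebalance_with_smote(x_train, y_train):
    # SMOTE operates on flat feature vectors, so flatten each image,
    # resample, then restore the image shape for the CNN.
    flat = x_train.reshape(len(x_train), -1)
    flat_res, y_res = SMOTE(random_state=42).fit_resample(flat, y_train)
    return flat_res.reshape(-1, *IMG_SHAPE), y_res

# Usage: x_bal, y_bal = rebalance_with_smote(x_train, y_train)
#        build_cnn().fit(x_bal, y_bal, epochs=20, validation_split=0.1)
```

SMOTE is applied only to the training split so that the synthetic samples never leak into testing; the flatten-resample-reshape step is needed because SMOTE interpolates between flat feature vectors rather than image tensors.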
