Abstract

The number of documented deaf people continues to increase. Deaf people communicate with each other using sign language. A problem arises when Muslims with hearing impairment or deafness need to recite the Al-Quran: Muslims recite the Al-Quran with their voice, but the deaf have no available means of doing so. Learning hijaiyah letters through finger gestures is therefore important to develop. In this study, we train a model to recognize hijaiyah letters from images and then use that model for real-time recognition. We evaluate four pre-trained CNN models: MobileNetV2 (MnetV2), VGG16, ResNet50, and Xception. During training, MnetV2, VGG16, and Xception reach the accuracy limit of 99.85% in 2, 3, and 11 s, respectively, while ResNet50 fails to reach the limit after 100 s of processing, achieving only 82.12% accuracy. In testing, MnetV2, VGG16, and Xception achieve 100% precision, recall, f1-score, and accuracy, whereas ResNet50 scores 81.55%, 86.04%, 82.04%, and 82.58% on those metrics, respectively. Deploying the trained MnetV2 model shows good performance in recognizing finger shapes in real time.
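The abstract reports precision, recall, f1-score, and accuracy for each model. As an illustration only (the paper's dataset and predictions are not given here), the sketch below computes these metrics with macro averaging from hypothetical toy predictions over three hijaiyah classes; the label names and data are assumptions, not the study's results.

```python
def classification_metrics(y_true, y_pred):
    """Macro-averaged precision, recall, f1-score, plus overall accuracy."""
    labels = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for label in labels:
        # Per-class counts: true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        precisions.append(precision)
        recalls.append(recall)
        f1s.append(f1)
    n = len(labels)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n, accuracy

# Hypothetical toy predictions for three hijaiyah classes (not from the paper).
y_true = ["alif", "ba", "ta", "alif", "ba", "ta"]
y_pred = ["alif", "ba", "ta", "alif", "ta", "ta"]
p, r, f, a = classification_metrics(y_true, y_pred)
```

A perfect classifier, as reported for MnetV2, VGG16, and Xception, would yield 1.0 on all four metrics.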
