Abstract

Sign language is a visual communication system based on hand shapes and gestures that enables the deaf and hard-of-hearing community to communicate with other people. Understanding sign language is challenging because it requires memorizing hand poses and gestures, which suggests a demand for an automatic sign language recognition system that allows everyone to understand this language. In this research, we used a Convolutional Neural Network (CNN) architecture and the TensorFlow library to build an image classification model. The Indonesian Sign Language (BISINDO) dataset is used as the data source; it contains 2,659 images covering twenty-six (26) letter categories. The images are divided into training and validation datasets. The experimental results show that the model achieved an accuracy of 96.67% on the training dataset and 100% on the validation dataset. In the image classification phase, we uploaded multiple images of alphabet characters and obtained 100% accuracy for each alphabet character.
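The abstract describes a CNN classifier built with TensorFlow, trained on a 26-class BISINDO letter dataset split into training and validation sets. The following is a minimal sketch of such a pipeline using the Keras API; the layer sizes, input resolution, directory paths, and training hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch of a CNN image classifier in TensorFlow/Keras for a
# 26-letter BISINDO task. Architecture and hyperparameters are assumptions;
# the abstract does not specify the exact model used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26          # one class per BISINDO letter
IMG_SIZE = (64, 64)       # assumed input resolution

def build_model():
    model = models.Sequential([
        layers.Input(shape=IMG_SIZE + (3,)),
        layers.Rescaling(1.0 / 255),                      # normalize pixel values
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # letter probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Load the training/validation split from directories of per-letter folders
# (hypothetical paths; replace with the actual BISINDO dataset location).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "bisindo/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "bisindo/val", image_size=IMG_SIZE, batch_size=32)

model = build_model()
model.fit(train_ds, validation_data=val_ds, epochs=10)
```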
