Abstract

Sign language is a visual communication system of hand poses and gestures that enables the deaf and hard-of-hearing community to communicate with other people. Understanding sign language is challenging because it requires memorizing hand poses and gestures. This motivates an automatic sign language recognition system that allows everyone to understand this language. In this research, we used a Convolutional Neural Network (CNN) architecture and the TensorFlow library to build an image classification model. The Indonesian Sign Language (BISINDO) dataset is used as the data source; it contains 2,659 images spanning the twenty-six (26) letter categories of the BISINDO alphabet. The images are split into training and validation sets. The experimental results show that the model achieved 96.67% accuracy on the training set and 100% accuracy on the validation set. In the image classification phase, we uploaded multiple images of alphabet characters and obtained 100% accuracy for each alphabet character.
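The abstract describes a CNN image classifier built with TensorFlow for the 26 BISINDO letter classes. The paper does not specify the network layout, so the sketch below is only a plausible minimal baseline: the input resolution, layer sizes, and filter counts are assumptions, not the authors' architecture.

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 26          # BISINDO alphabet: one class per letter
IMG_SIZE = (64, 64)       # assumed input resolution, not stated in the paper

def build_model(num_classes: int = NUM_CLASSES) -> tf.keras.Model:
    """A small CNN classifier in the spirit of the abstract's description."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=IMG_SIZE + (3,)),
        tf.keras.layers.Rescaling(1.0 / 255),              # normalize pixel values
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Sanity check on a dummy batch: one probability per letter class.
dummy = np.zeros((1,) + IMG_SIZE + (3,), dtype="float32")
probs = model.predict(dummy, verbose=0)
print(probs.shape)  # (1, 26)
```

In practice the training and validation splits mentioned in the abstract could be loaded with `tf.keras.utils.image_dataset_from_directory` and passed to `model.fit`.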
