Abstract

Sign language is a form of communication commonly used by people with hearing or speech impairments. Because not everyone understands it, automatically translating sign language into alphabet letters/text would ease communication between deaf people and those who do not sign. This research aims to develop a sign language recognition system that processes video input in real time using You Only Look Once (YOLO), an object detection method based on Convolutional Neural Networks (CNNs) that is both fast and accurate. The YOLOv3 pre-trained model is retrained, with the number of channels and classes adjusted to the requirements of sign language recognition. For this research, we collected a dataset independently based on Indonesian Sign Language (BISINDO). In experiments on image data, the system achieves 100% precision, recall, accuracy, and F1 score. On video data, the system achieves 77.14% precision, 93.1% recall, 72.97% accuracy, and an 84.38% F1 score, at a speed of 8 fps.
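
For context, retraining YOLOv3 on a custom class set is conventionally done by editing the Darknet configuration so that the convolutional layer feeding each detection head outputs 3 * (classes + 5) filters, since YOLOv3 predicts 3 anchor boxes per grid cell, each with 4 box coordinates, 1 objectness score, and one score per class. Below is a minimal sketch of such an adjustment, assuming a standard Darknet-style .cfg file; the class count (26, one per alphabet sign) and the file names are hypothetical, as the abstract does not state them.

    # Sketch: adapt a Darknet YOLOv3 .cfg for a custom class count.
    # Hypothetical file names and class count; the paper does not
    # publish its exact configuration.

    NUM_CLASSES = 26  # assumption: one class per BISINDO alphabet sign

    # 3 anchors per cell; each box = 4 coordinates + 1 objectness
    # score + NUM_CLASSES class scores.
    FILTERS = 3 * (NUM_CLASSES + 5)  # 93 for 26 classes

    def patch_cfg(src: str, dst: str) -> None:
        lines = open(src).read().splitlines()
        last_filters_idx = None
        for i, line in enumerate(lines):
            stripped = line.strip()
            if stripped.startswith("filters="):
                # Remember the most recent convolutional filters line.
                last_filters_idx = i
            elif stripped == "[yolo]":
                # Rewrite the conv layer directly preceding this
                # detection head (the standard cfg places it there).
                lines[last_filters_idx] = f"filters={FILTERS}"
            elif stripped.startswith("classes="):
                # classes= appears only inside [yolo] sections.
                lines[i] = f"classes={NUM_CLASSES}"
        with open(dst, "w") as f:
            f.write("\n".join(lines) + "\n")

    if __name__ == "__main__":
        patch_cfg("yolov3.cfg", "yolov3-bisindo.cfg")

The same filter arithmetic checks the reported scores: with precision 0.7714 and recall 0.931, F1 = 2PR / (P + R) = 2(0.7714)(0.931) / (0.7714 + 0.931) is approximately 0.8438, matching the 84.38% in the abstract.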
