Abstract

Individuals with hearing or speech impairments face challenges in communicating with others and require special techniques to express their thoughts and feelings. Sign language is an alternative means of communication that uses specific patterns of hand gestures to deliver messages in place of oral speech. Unfortunately, most people cannot use or read sign language; because of its complexity and the many varieties of sign language worldwide, only well-trained individuals can use it as a communication medium. This research offers a solution in the form of a machine-learning Android application that recognizes sign language captured by a smartphone camera and translates it into Latin characters. Recognition is performed with a Convolutional Neural Network (CNN), one of the most popular deep learning algorithms. The application recognizes the 26 letters of the Indonesian Sign Language (Bahasa Isyarat Indonesia/BISINDO) alphabet using the MobileNetV3 architecture. To build the model, dataset images were collected from five signers demonstrating the 26 BISINDO letters under varying lighting, backgrounds, and hand positions. These images were further expanded through an image augmentation process that introduces randomness by adjusting rotation, noise, and brightness. In tests on 6,240 dataset images, the application achieved 75.38% accuracy in recognizing the Indonesian Sign Language alphabet.
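The augmentation step described above (random rotation, noise, and brightness adjustment) could be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it assumes images are NumPy arrays, uses 90-degree rotation steps for simplicity, and the function name and parameter ranges are hypothetical.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator,
            noise_std: float = 10.0,
            brightness_range: tuple = (0.8, 1.2)) -> np.ndarray:
    """Hypothetical augmentation sketch: random rotation, additive
    Gaussian noise, and multiplicative brightness scaling on an
    8-bit image array."""
    out = image.astype(np.float32)
    # Random rotation in 90-degree steps (arbitrary angles would need
    # an imaging library such as Pillow or OpenCV).
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    # Additive Gaussian noise.
    out = out + rng.normal(0.0, noise_std, size=out.shape)
    # Multiplicative brightness adjustment.
    out = out * rng.uniform(*brightness_range)
    # Clip back to the valid 8-bit range.
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = np.full((224, 224, 3), 128, dtype=np.uint8)  # dummy grey image
aug = augment(img, rng)
print(aug.shape, aug.dtype)
```

Applying several such randomized transforms to each collected image is a standard way to grow a small hand-gesture dataset and make the trained CNN less sensitive to lighting and camera orientation.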
