Abstract

Sign language allows mute people to communicate; a problem occurs when a conversation partner fails to understand it. Despite efforts to address this problem, an effective solution has not yet been found. In this work, a Convolutional Neural Network was trained separately on two datasets, Binary and Red Green Blue (RGB), each containing 25,900 images of Nigerian Sign Language. A pre-trained deep neural network module was used to detect hand gestures in the video feed, which tackled the issue of complex backgrounds and also showed excellent detection in dimly lit areas. Accuracies of (98.95%, 76%) and (98.87%, 98.85%) were obtained on the training and validation sets respectively. The real-time system developed implements both models as a single system, which makes it unique.
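The abstract does not specify the CNN architecture, so as a minimal illustration of the core operation such a network applies to each sign-language frame, the sketch below implements a single valid-mode 2D convolution in plain NumPy; the image and kernel values are toy placeholders, not data from the paper.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation), the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with the image patch under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy stand-in for a binary sign-language frame (uniform 5x5 image)
img = np.ones((5, 5))
# Simple vertical-edge kernel; its weights sum to zero,
# so a uniform image yields a zero response everywhere
edge = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])
feat = conv2d(img, edge)  # shape (3, 3), all zeros
```

A real classifier would stack many such filters with nonlinearities and pooling; frameworks like TensorFlow or PyTorch provide optimized versions of this operation.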
