Abstract

Most people can see, hear, and respond to their surroundings. Some, however, lack these abilities, most notably people who are deaf or mute and who communicate through sign language. Because sign language is not widely understood, communicating with the hearing population remains a significant barrier for them, and it makes it difficult to participate fully in educational, social, and professional settings. The goal of this research is to build a sign language translation system, based on deep learning and neural networks, that helps people who are deaf or have difficulty speaking or hearing communicate with others. The classification model is a simple Convolutional Neural Network (CNN), an architecture developed primarily for the analysis of visual imagery. A three-layer CNN was trained and tested in real time on segmented RGB hand gestures. The image dataset for each gesture was constructed from plain photographs of the hand captured with a personal device such as a laptop webcam. With this CNN model, a training accuracy of roughly 89% and a testing accuracy of 98.5% were obtained.

Keywords: Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), Gesture Recognition, OpenCV, TensorFlow
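
The abstract describes a three-layer CNN trained on segmented RGB hand-gesture images. The sketch below is a minimal illustration of such a model in TensorFlow/Keras, not the authors' implementation; the 64x64 input size and the 26-class output (one per static alphabet gesture) are assumptions made for the example, as the abstract does not specify them.

# Minimal sketch of a three-convolutional-layer gesture classifier
# (illustrative only; input shape and class count are assumed, not from the paper).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumption: one class per static alphabet gesture

def build_gesture_cnn(input_shape=(64, 64, 3), num_classes=NUM_CLASSES):
    """Three convolution/pooling blocks followed by a dense softmax classifier."""
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                      # regularisation to limit overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_gesture_cnn().summary()

In practice such a model would be fed batches of webcam frames that have first been segmented (for example, with OpenCV) so that only the hand region is passed to the network.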
