Abstract

The biggest challenge the deaf and mute community faces is that the people around them do not understand the sign language they use to communicate with one another. Written communication can be used instead, but it is slower than face-to-face conversation, and sign languages are more effective in emergency situations than text-based communication. India, despite having a large deaf population of almost 18 million, has only around 250 interpreters, trained or untrained. The proposed system uses a custom convolutional neural network (CCNN) model to recognize hand gestures and address this gap. The system applies a filter to the hand image before passing it to a classifier that identifies the type of hand movement. The CCNN strategy employs two levels of algorithms to predict and discriminate between increasingly similar symbols, so that the presented symbol is recognized as precisely as possible. Convolutional neural networks (CNNs) can accurately identify a variety of gestures after being trained on large datasets of hand sign photographs. Because these networks use many layers of filters and pooling to extract relevant information from the input images, they recognize hand signs with an accuracy of 99.95%, which is much higher than previously built models such as SIGNGRAPH, SVM, KNN, CNN + Bi-LSTM, 3D-CNN, 2D CNN, and 1D CNN skeleton networks. The simulation results show that the suggested CCNN-based learning approach is useful for hand sign detection and future research when compared with existing machine learning models.
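The core operations the abstract attributes to CNNs, stacked filters followed by pooling, can be illustrated with a minimal NumPy sketch. This is not the paper's CCNN; the image, kernel, and function names below are illustrative assumptions showing how a convolution filter and a max-pooling step extract an edge feature from a toy "hand" image.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 image with a vertical edge down the middle (hypothetical input)
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])  # responds to a left-to-right intensity rise

# conv -> ReLU -> pool: the pooled map lights up where the edge is
features = max_pool(np.maximum(conv2d(image, edge_kernel), 0))
print(features.shape)  # (3, 2)
```

In a full CNN these conv/pool stages are stacked, with learned kernels, and the final feature map is flattened into a classifier that outputs one of the sign classes.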
