Hand gestures (HGs) are widely acknowledged as the communication standard of Indian Sign Language (ISL) used in schools for deaf and speech-impaired students. In this study, we propose an automated Indian sign language recognition (ISLR) method for the English alphabet. First, the hand region is segmented using a skin color detection method optimized with the Grasshopper optimization algorithm (GOA). The segmentation's efficacy is then assessed across three techniques: the GOA-based skin color detection algorithm (GOA-SCDA), the particle swarm optimization-based SCDA (PSO-SCDA), and the artificial bee colony-based SCDA (ABC-SCDA). A database of the extracted gestures, each representing a distinct letter of the English alphabet, is then created. The system is trained for gesture recognition using a template-based matching approach, and classification is performed with a support vector machine (SVM) and a convolutional neural network (CNN). The proposed recognition method achieved its highest accuracy of 97.85% with GOA-SCDA, compared to 89.29% with PSO-SCDA and 93.96% with ABC-SCDA. Furthermore, the CNN outperformed the SVM in classification, achieving an accuracy of 99.2% and a precision of 81.8%.
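As a rough illustration of the segmentation step described above, the sketch below thresholds an input frame in YCbCr space to isolate skin-colored pixels; the chroma bounds are the kind of parameters a metaheuristic such as GOA (or PSO/ABC) could tune against a segmentation-quality objective. This is a minimal sketch under our own assumptions: the function names, the fixed default bounds, and the IoU-style fitness are illustrative and are not taken from the paper.

```python
import cv2
import numpy as np

# Candidate Cb/Cr bounds -- in the paper these would be tuned by GOA/PSO/ABC;
# the values below are common defaults used purely for illustration.
CB_MIN, CB_MAX = 77, 127
CR_MIN, CR_MAX = 133, 173


def segment_hand(bgr_image, cb_range=(CB_MIN, CB_MAX), cr_range=(CR_MIN, CR_MAX)):
    """Return a binary mask of skin-colored pixels (hypothetical helper)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)  # channel order is Y, Cr, Cb
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1])).astype(np.uint8) * 255
    # Morphological opening to suppress small skin-like noise regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)


def segmentation_fitness(mask, ground_truth):
    """Example fitness (intersection-over-union) that an optimizer such as
    GOA could maximize when searching over the threshold bounds."""
    inter = np.logical_and(mask > 0, ground_truth > 0).sum()
    union = np.logical_or(mask > 0, ground_truth > 0).sum()
    return inter / union if union else 0.0


if __name__ == "__main__":
    frame = cv2.imread("gesture.jpg")  # hypothetical input image
    if frame is not None:
        hand_mask = segment_hand(frame)
        cv2.imwrite("hand_mask.png", hand_mask)
```

In such a setup, each candidate solution evaluated by the optimizer is simply a vector of threshold bounds, and the fitness above (or any other segmentation-quality measure) scores how well the resulting mask matches a reference hand region.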