Abstract

Imagine a world where words are not spoken and emotions are not expressed. For the Deaf community, this is not a mere thought experiment but a daily reality. They communicate primarily through sign language and, therefore, regularly feel alienated in a society where spoken words are the primary tool for conveying thoughts and feelings. Under such circumstances, most Deaf and hard-of-hearing people struggle to communicate effectively. They usually rely on interpreters or on signing itself, but neither alternative is fully efficient or effective. While signing is the most intuitive and meaningful option, its grammar and semantic variations make it difficult for those outside the Deaf culture to comprehend. We have therefore developed a software prototype that interprets sign language automatically. This paper presents a new method to recognize Indian Sign Language alphabets (A-Z) and digits (0-9) in a real-time video feed by employing the Bag of Visual Words (BoVW) model. The system not only predicts the labels of the signs but also renders the output as text and speech. Segmentation combines skin-color detection and background subtraction. Speeded Up Robust Features (SURF) are extracted from the segmented images, and histograms of visual words are built to map the signs to text labels. We have employed Support Vector Machines (SVM) and Convolutional Neural Networks (CNN) for classification. Finally, we have built an interactive Graphical User Interface (GUI) to make the system more user-friendly.
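The following is a minimal sketch of the BoVW pipeline outlined above (skin-color segmentation, SURF descriptors, a k-means visual vocabulary, and an SVM classifier). It is illustrative only: it assumes `opencv-contrib-python` built with the nonfree `xfeatures2d` module (required for SURF) and scikit-learn, and the HSV thresholds, Hessian threshold, and vocabulary size are assumed example values, not the authors' settings.

```python
# Illustrative Bag-of-Visual-Words sign recognizer sketch (not the paper's exact code).
# Assumes opencv-contrib-python with nonfree modules (for SURF) and scikit-learn.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def segment_hand(frame_bgr):
    """Rough skin-colour segmentation in HSV space; the colour range is an assumed example."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)

# SURF extractor (64-dimensional descriptors by default).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def surf_descriptors(image_bgr):
    """Extract SURF descriptors from the segmented hand region."""
    gray = cv2.cvtColor(segment_hand(image_bgr), cv2.COLOR_BGR2GRAY)
    _, desc = surf.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 64), np.float32)

def bovw_histogram(desc, vocabulary):
    """Map local descriptors to a normalised visual-word histogram."""
    words = vocabulary.predict(desc)
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

def train(images, labels, vocab_size=100):
    """Cluster all descriptors into a visual vocabulary, then fit an SVM on histograms."""
    all_desc = np.vstack([surf_descriptors(img) for img in images])
    vocabulary = KMeans(n_clusters=vocab_size, n_init=10).fit(all_desc)
    X = np.array([bovw_histogram(surf_descriptors(img), vocabulary) for img in images])
    clf = SVC(kernel="linear").fit(X, labels)
    return vocabulary, clf

def predict(frame, vocabulary, clf):
    """Predict the sign label (e.g. 'A'-'Z', '0'-'9') for one video frame."""
    desc = surf_descriptors(frame)
    if len(desc) == 0:
        return None
    return clf.predict([bovw_histogram(desc, vocabulary)])[0]
```

In this sketch the predicted text label could then be passed to any text-to-speech engine to produce the speech output mentioned above; the CNN-based classifier would replace the SVM step while reusing the same segmentation front end.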
