Abstract

Deaf and hard-of-hearing people, as well as others who are unable to communicate verbally, use sign language to communicate within their communities and with others. Sign languages are established languages that convey information through a visual-manual modality. However, outside these communities few people are familiar with sign language, and relying on an interpreter can be cumbersome and costly. This work addresses the problem of real-time finger-spelling recognition in sign language and aims to bridge this communication gap by building a system that recognises alphanumeric hand gestures in real time. We gathered, from scratch using webcam images, a dataset covering 36 distinct gestures (the alphabets a-z and the digits 0-9) together with a dataset of common ISL hand gestures. The system accepts a hand gesture as input and displays the recognised character on the monitor screen in real time, placing the project within human-computer interaction (HCI). To apply transfer learning, we fine-tuned a pre-trained SSD MobileNet V2 architecture on our own dataset and obtained a robust model that classifies signs consistently in the vast majority of cases. Much earlier work in this area relied on sensors (such as glove sensors) or classical image-processing techniques (such as edge detection and the Hough transform); sensor-based systems in particular are expensive and out of reach for many users, whereas our software is free and simple to use. During the study, several human-computer interaction approaches to posture recognition were investigated and evaluated, and the best solution was found to combine image-processing techniques with human-movement classification. Even without a controlled background and under low light, the system detects the chosen sign-language signs with an accuracy of 70-80%. The main goal of this research is a computer-based intelligent system that allows deaf people to interact effectively with others using hand gestures.

Keywords: pre-trained SSD MobileNet V2, sign language, HCI
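The real-time recognition loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released code: it assumes the fine-tuned SSD MobileNet V2 has been exported as a TensorFlow 2 SavedModel, and the model path, the LABELS mapping for the 36 gestures, and the 0.5 confidence threshold are all placeholders chosen for the example.

    # Minimal sketch: real-time sign recognition from a webcam with a
    # fine-tuned SSD MobileNet V2 exported as a TF2 SavedModel.
    # MODEL_DIR and LABELS are hypothetical, not from the paper.
    import cv2
    import numpy as np
    import tensorflow as tf

    MODEL_DIR = "exported_model/saved_model"  # placeholder export path
    LABELS = {i + 1: c for i, c in
              enumerate("abcdefghijklmnopqrstuvwxyz0123456789")}

    detect_fn = tf.saved_model.load(MODEL_DIR)

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # The Object Detection API expects a uint8 batch of RGB images.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
        out = detect_fn(batch)

        boxes = out["detection_boxes"][0].numpy()
        scores = out["detection_scores"][0].numpy()
        classes = out["detection_classes"][0].numpy().astype(int)

        h, w = frame.shape[:2]
        for box, score, cls in zip(boxes, scores, classes):
            if score < 0.5:  # assumed confidence threshold
                continue
            ymin, xmin, ymax, xmax = box
            p1 = (int(xmin * w), int(ymin * h))
            p2 = (int(xmax * w), int(ymax * h))
            cv2.rectangle(frame, p1, p2, (0, 255, 0), 2)
            cv2.putText(frame, f"{LABELS.get(cls, '?')} {score:.2f}",
                        (p1[0], p1[1] - 8), cv2.FONT_HERSHEY_SIMPLEX,
                        0.8, (0, 255, 0), 2)

        cv2.imshow("sign recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

The exported detector is called directly on a uint8 image batch, following the standard TF2 Object Detection API inference pattern; the recognised character for each detection above the threshold is drawn on the frame, matching the on-screen display behaviour the abstract describes.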
