Abstract

People with hearing loss frequently use sign language to interact with their community and communicate with others, as it is a largely visual form of communication and an essential tool for exchanging ideas. Unlike spoken language, it relies on manual gestures, nonverbal facial cues, and body motions to express thoughts and convey meaning. The goal of Sign Language Recognition (SLR) is to identify, interpret, and translate these signs into the appropriate speech or text. This paper proposes a novel method for recognizing individual alphabet signs in sign language so that words can be formed from them. A deep learning network detects each sign and outputs the corresponding text character; the recognized characters can then be sequentially concatenated into words, which can in turn be converted into voice output.
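
The abstract does not specify the network architecture, so the following is only a minimal sketch of the described pipeline under stated assumptions: a small CNN classifies cropped hand images into one of 26 static alphabet signs, and the predicted letters are concatenated into a word that could afterwards be passed to a text-to-speech engine. All layer sizes, the input resolution, and the names `SignCNN` and `letters_to_word` are hypothetical, not the paper's actual design.

```python
# Illustrative sketch only: a CNN letter classifier feeding a word buffer,
# mirroring the sign -> text -> word pipeline described in the abstract.
import torch
import torch.nn as nn

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # 26 alphabet signs (assumed class set)

class SignCNN(nn.Module):
    """Small convolutional classifier over 64x64 grayscale hand crops (assumed input)."""
    def __init__(self, num_classes: int = len(ALPHABET)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

@torch.no_grad()
def letters_to_word(model: SignCNN, frames: torch.Tensor) -> str:
    """Classify a batch of hand-crop frames and concatenate the predicted letters."""
    model.eval()
    preds = model(frames).argmax(dim=1)
    return "".join(ALPHABET[i] for i in preds.tolist())

if __name__ == "__main__":
    model = SignCNN()                      # untrained here, so output letters are arbitrary
    dummy = torch.randn(3, 1, 64, 64)      # three random frames stand in for camera input
    word = letters_to_word(model, dummy)
    print(word)                            # this word could then be fed to a TTS engine
```

The word produced this way could be voiced with any off-the-shelf text-to-speech library (e.g., pyttsx3 in Python); the abstract does not name the engine the authors used.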
