With the growing demand for inclusive technologies in an age of rapid electronic communication, there is increasing interest in automatic audio-to-sign-language conversion systems. This article presents an approach that converts spoken language into sign language in real time using machine translation, opening a communication path for the deaf and hard-of-hearing community. In our method, we use Natural Language Processing (NLP) to transcribe audio into text and then deep learning for sign language generation. The system recognizes keywords and important phrases in the transcribed text, associates them with corresponding sign language gestures, and renders the resulting sign animations through an avatar-based visualization. Unlike previous systems that translate a sentence directly into its corresponding signs, our approach relies on contextual analysis to convey meaning more faithfully.

Key Words: Audio to Sign Language Translation, Speech-to-Text Conversion, Natural Language Processing (NLP), Machine Translation, Deep Learning, Gesture Recognition, Avatar-based Sign Language Visualization, Accessibility, Multilingual Sign Language
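As a rough illustration of the pipeline described above, the sketch below chains a speech-to-text call, a naive keyword filter, and a lookup into a gesture dictionary that would drive the avatar renderer. It is a minimal sketch, not the system itself: the SpeechRecognition package, the GESTURE_CLIPS table, the clip file names, the stopword list, and sample.wav are all illustrative assumptions.

```python
# Minimal sketch of an audio -> text -> keywords -> sign-animation pipeline.
# Assumes the open-source SpeechRecognition package; the gesture dictionary
# and clip paths below are hypothetical placeholders for the avatar stage.
import speech_recognition as sr

# Hypothetical mapping from keywords to pre-built avatar animation clips.
GESTURE_CLIPS = {
    "hello": "clips/hello.anim",
    "thank": "clips/thank_you.anim",
    "help": "clips/help.anim",
}

# Tiny illustrative stopword list standing in for the NLP keyword stage.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "you", "can", "me"}

def transcribe(audio_path: str) -> str:
    """Speech-to-text stage: convert an audio file into plain text."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)  # any STT engine could be used here

def extract_keywords(text: str) -> list[str]:
    """Keyword stage: keep content words, drop stopwords and punctuation."""
    return [w for w in text.lower().split() if w.isalpha() and w not in STOPWORDS]

def plan_animation(keywords: list[str]) -> list[str]:
    """Gesture stage: map keywords to sign-animation clips where one exists."""
    return [GESTURE_CLIPS[w] for w in keywords if w in GESTURE_CLIPS]

if __name__ == "__main__":
    text = transcribe("sample.wav")              # e.g. "hello can you help me"
    clips = plan_animation(extract_keywords(text))
    print(clips)                                 # clip sequence handed to the avatar renderer
```

In a full system, the keyword filter would be replaced by the contextual NLP analysis mentioned in the abstract, and the clip lookup by a learned text-to-gesture model, but the data flow stays the same.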