Abstract
The deaf community in India relies heavily on Sign Language (SL) as a means of communication, but because so few hearing people are fluent in SL, a communication gap exists between the hearing and deaf communities. The Sign Language Translator (SLT) presented in this paper employs MediaPipe and an LSTM network to convert SL to text and vice versa. The proposed approach first uses MediaPipe to extract hand movements and facial expressions from videos of SL signs. These features are then fed to an LSTM model trained on a sizeable dataset of SL signs and their corresponding text labels; the LSTM processes the input feature sequence and produces the matching text labels. A dataset of SL signs and their associated text labels was collected to train and evaluate the proposed system, and the results show that it performs well when converting SL signs to text and vice versa. Because the system can recognise and translate multiple signs in succession, it is suitable for real-world use. Overall, the proposed Sign Language Translator can act as a bridge between the hearing and deaf communities in India, enabling more seamless inclusion and communication. By allowing deaf people to interact with the hearing world more effectively, it has the potential to significantly improve their quality of life.
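To make the sequence-modelling step concrete, the following is a minimal NumPy sketch of how an LSTM can classify a sequence of per-frame keypoint vectors (the kind of features MediaPipe produces) into a sign label. It is illustrative only: the weight values are random, the feature dimension, hidden size, and vocabulary size are assumptions, and the paper's actual MediaPipe extraction and trained model are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step over a per-frame feature vector x.
    W, U, b stack the input, forget, candidate, and output gates."""
    H = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:4 * H])  # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def classify_sequence(frames, W, U, b, W_out, b_out):
    """Run the LSTM over a (T, D) sequence of keypoint vectors and
    return softmax probabilities over the sign vocabulary."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in frames:
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Illustrative sizes: D keypoint features per frame, H hidden units,
# K signs in the vocabulary, T frames per clip (all assumed values).
D, H, K, T = 126, 64, 10, 30   # 126 ≈ 2 hands × 21 landmarks × 3 coords
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(0, 0.1, (K, H))
b_out = np.zeros(K)

frames = rng.normal(0, 1, (T, D))   # stand-in for MediaPipe keypoints
probs = classify_sequence(frames, W, U, b, W_out, b_out)
```

In practice the gate weights would be learned (e.g. with Keras or PyTorch) from the labelled sign dataset, and the argmax of `probs` would give the predicted text label for the clip.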
More From: International Research Journal of Computer Science