Abstract

Sign language is a primary medium of communication for many deaf and hard-of-hearing people. This paper presents a real-time Sign Language Recognition (SLR) system that identifies American Sign Language (ASL) words in English and translates them into 5 spoken languages (Mandarin, Spanish, French, Italian, and Indonesian). Facial expression analysis is combined with sign recognition in an attempt to capture the signer's emotions. MediaPipe is used to extract features, and an LSTM network with Dense layers classifies the signs. A Convolutional Neural Network (CNN) trained on the FER2013 dataset identifies emotions. The system recognized 10 ASL words with an accuracy of 86.33% and translated them into the 5 target languages; it also recognized 4 emotions with an accuracy of 73.62%.
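As a concrete illustration of the pipeline the abstract describes, the sketch below pairs MediaPipe Holistic keypoint extraction with an LSTM + Dense classifier in Keras. It is a minimal sketch under stated assumptions, not the authors' implementation: the sequence length, layer sizes, and the `extract_keypoints` helper are illustrative, and only the 10-word output size comes from the abstract.

```python
# Minimal sketch (not the paper's code) of a MediaPipe -> LSTM + Dense
# sign classifier. Shapes and hyperparameters are illustrative assumptions.
import numpy as np
import mediapipe as mp
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

mp_holistic = mp.solutions.holistic  # holistic model: pose + both hands

def extract_keypoints(results):
    """Flatten one frame's MediaPipe Holistic landmarks into a vector.

    Pose: 33 landmarks x (x, y, z, visibility); each hand: 21 x (x, y, z).
    Missing parts are zero-filled so every frame has the same length.
    """
    pose = (np.array([[p.x, p.y, p.z, p.visibility]
                      for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    lh = (np.array([[p.x, p.y, p.z]
                    for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z]
                    for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, lh, rh])  # 258 features per frame

NUM_FRAMES = 30      # assumed frames per sign clip
NUM_FEATURES = 258   # output size of extract_keypoints above
NUM_WORDS = 10       # the abstract reports a 10-word ASL vocabulary

# Stacked LSTMs over the keypoint sequence, Dense layers for classification.
model = Sequential([
    LSTM(64, return_sequences=True, activation="relu",
         input_shape=(NUM_FRAMES, NUM_FEATURES)),
    LSTM(128, return_sequences=False, activation="relu"),
    Dense(64, activation="relu"),
    Dense(NUM_WORDS, activation="softmax"),  # one class per ASL word
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

In such a setup, each training sample would be a `(NUM_FRAMES, NUM_FEATURES)` array of keypoints gathered by running `mp_holistic.Holistic().process()` over consecutive video frames; the predicted English word could then be passed to any translation service for the 5 target languages.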
