Abstract


Sign language is a language used to communicate through gestures and facial expressions. This study focuses on the classification of Bahasa Isyarat Indonesia (BISINDO), the Indonesian Sign Language, since many people still have difficulty communicating with deaf people. The study builds a video-based translator system using a Convolutional Neural Network (CNN) with transfer learning, an approach commonly used in computer vision, especially in image classification. The transfer learning architectures used in this study are MobileNetV2, ResNet50V2, and Xception. The study uses 11 commonly used BISINDO vocabularies. Predictions are made in a real-time scenario using a webcam. In addition, the system gave good results in an experiment based on an interaction between one pair of deaf and hearing participants. Across all experiments, the Xception architecture achieved the best F1 score of 98.5%.
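For context, the sketch below shows one way such a transfer-learning classifier could be assembled in Keras: a pretrained Xception backbone frozen as a feature extractor with an 11-class softmax head. The input resolution, freezing strategy, and training settings are illustrative assumptions and are not taken from the paper.

```python
# Hedged sketch of a transfer-learning classifier for 11 BISINDO classes.
# Assumes TensorFlow/Keras; hyperparameters are placeholders, not the paper's.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 11              # the 11 BISINDO vocabularies mentioned in the abstract
INPUT_SHAPE = (299, 299, 3)   # assumed input resolution (Xception's default)

# Pretrained Xception backbone with ImageNet weights, frozen for feature extraction.
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=INPUT_SHAPE, pooling="avg"
)
base.trainable = False

# Small classification head on top of the pooled backbone features.
model = models.Sequential([
    base,
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

The same pattern applies to MobileNetV2 or ResNet50V2 by swapping the backbone class; for real-time use, webcam frames would be resized to the model's input shape and passed to `model.predict` frame by frame.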
