Abstract
People with hearing and speech impairments often have difficulty communicating with the general public because sign language is not widely understood, which leads to social isolation and barriers to accessing information and education. The development of sign language translator technology is therefore expected to improve the communication and independence of people with disabilities. The methodology used in this research includes data collection through literature study, questionnaires, and documentation. The BISINDO data in this research are processed with a Long Short-Term Memory (LSTM) network that detects skeleton keypoints on the hands, face, and body, and the system implementation uses a Kinect sensor to capture hand movements in real time. System development follows the Agile method to ensure functionality and fulfillment of user needs. System performance was evaluated with a confusion matrix by calculating accuracy, recall, precision, and F1-score values on a dataset of 90 sequences captured in real time, consisting of 30 performances each of the "my", "good", and "I love you" signs. The results show that the LSTM model trained for 140 epochs achieves accuracy, recall, precision, and F1-score values of 1.0, and that user responses to the system are positive.
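As a rough illustration of the pipeline described above, the sketch below builds a small LSTM classifier over fixed-length keypoint sequences and scores it with the confusion-matrix metrics reported in the abstract. The frame count, feature dimension, class labels, and data variables (X_train, y_train, X_test, y_test) are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of an LSTM sign classifier, assuming 30-frame sequences of
# flattened hand/face/body keypoints (feature dimension is an assumption).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

NUM_FRAMES, NUM_FEATURES = 30, 1662        # assumed sequence length and keypoint size
CLASSES = ["my", "good", "iloveyou"]       # the three BISINDO signs in the abstract

model = Sequential([
    LSTM(64, return_sequences=True, activation="relu",
         input_shape=(NUM_FRAMES, NUM_FEATURES)),
    LSTM(128, return_sequences=False, activation="relu"),
    Dense(64, activation="relu"),
    Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Train for 140 epochs as reported (X_train/y_train are assumed, one-hot labels):
# model.fit(X_train, y_train, epochs=140)

# Confusion-matrix style evaluation on held-out sequences:
# y_pred = np.argmax(model.predict(X_test), axis=1)
# y_true = np.argmax(y_test, axis=1)
# print(accuracy_score(y_true, y_pred),
#       precision_score(y_true, y_pred, average="macro"),
#       recall_score(y_true, y_pred, average="macro"),
#       f1_score(y_true, y_pred, average="macro"))
```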