Abstract

Sign language recognition is an important application for addressing the communication challenges faced by Deaf and hard-of-hearing communities. This study introduces a methodology for sign language recognition using spatio-temporal features, focusing on two distinct sign language datasets: the CSL (Chinese Sign Language) alphabet and the ArSL (Arabic Sign Language) alphabet. The objective is to build a robust recognition system capable of reliably identifying sign language gestures. The proposed technique uses a combination of spatio-temporal features extracted from video sequences of sign language gestures. These features capture both the spatial layout and the temporal dynamics of the gestures, allowing the model to recognize signs with greater precision. Experiments were carried out on the CSL and ArSL alphabet datasets to validate the method's efficacy, and the results are encouraging: the proposed system recognized CSL alphabet signs with an accuracy of 90.87% and ArSL alphabet signs with an accuracy of 89.46%. These high accuracy rates demonstrate the strength of the spatio-temporal feature-based approach to sign language recognition, and the system's success on two different sign languages suggests that it is adaptable and useful across a wide range of sign language applications. This study contributes to the evolution of assistive technology, making sign language recognition more accessible and efficient for those who communicate using sign language. The findings also pave the way for further research into spatio-temporal feature-based approaches to sign language recognition, with potential applications in real-world settings such as sign language interpretation and communication support.
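The abstract does not specify a particular architecture; one common way to realize the "spatial layout plus temporal dynamics" idea is a per-frame CNN followed by a recurrent layer over the sequence of frame features. The sketch below is a minimal, hypothetical illustration of that pattern in PyTorch, not the paper's actual model: the class name SpatioTemporalSignClassifier, the layer sizes, and the 30-class, 16-frame input are assumptions made for illustration only.

```python
# Hypothetical sketch of a spatio-temporal recognition pipeline (not the paper's
# exact architecture): a per-frame CNN captures the spatial layout of each frame,
# and an LSTM models the temporal dynamics across the video sequence.
import torch
import torch.nn as nn

class SpatioTemporalSignClassifier(nn.Module):
    def __init__(self, num_classes, cnn_dim=128, lstm_dim=256):
        super().__init__()
        # Spatial branch: a small CNN applied to each frame independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, cnn_dim), nn.ReLU(),
        )
        # Temporal branch: an LSTM over the sequence of per-frame features.
        self.lstm = nn.LSTM(cnn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, num_classes)

    def forward(self, video):
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        frame_feats = self.cnn(video.view(b * t, c, h, w)).view(b, t, -1)
        _, (hidden, _) = self.lstm(frame_feats)  # hidden: (1, batch, lstm_dim)
        return self.head(hidden[-1])             # per-sign class logits

# Usage with dummy data: 30 alphabet classes, 16-frame RGB clips at 112x112.
model = SpatioTemporalSignClassifier(num_classes=30)
logits = model(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 30])
```

In this kind of pipeline the CNN output plays the role of the spatial feature and the recurrent hidden state plays the role of the temporal feature; the actual feature extraction used in the study should be taken from the full text rather than this sketch.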
