Abstract

Sign language plays a pivotal role as a primary means of communication for individuals with hearing and speech impairments. Given their limited auditory and vocal communication abilities, these individuals rely heavily on visual cues, such as body language and hand gestures, to convey their emotions and thoughts in everyday social interactions. Sign language alphabets primarily consist of letters and digits. This study introduces a hybrid methodology for automated sign language identification, combining a Temporal Convolutional Neural Network (TCNN) with a Custom Convolutional Neural Network (CCNN). The effectiveness of this system was evaluated on three distinct benchmark datasets covering isolated letters and digits; these are comprehensive, publicly accessible resources spanning both British and American sign languages. The proposed CNN-TCN model comprises several phases, including data collection, preprocessing (labeling, normalization, and frame extraction), feature extraction using the CCNN, and sequence modeling through the TCNN. The experimental results demonstrate the strong performance of the proposed system, with accuracy, precision, recall, and F1 scores of 95.31%, 94.03%, 93.33%, and 93.56%, respectively, across the three datasets. These outcomes support the viability and effectiveness of the CNN-TCN method for sign language recognition.
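
The abstract describes a two-stage pipeline: a per-frame CNN for feature extraction followed by a temporal convolutional stage for sequence modeling. The sketch below illustrates that general structure in PyTorch; it is not the authors' implementation, and all layer sizes, the 16-frame clip length, the 64x64 input resolution, and the 26-class output are illustrative assumptions.

```python
# Minimal sketch of a CNN-TCN pipeline for isolated sign recognition (PyTorch).
# Layer widths, clip length, and input resolution are assumptions for illustration;
# the abstract only names the two stages (CCNN feature extraction + TCNN sequence modeling).
import torch
import torch.nn as nn


class FrameCNN(nn.Module):
    """Per-frame feature extractor (stands in for the paper's custom CNN)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, x):          # x: (batch, 3, H, W)
        return self.net(x)         # (batch, feat_dim)


class TemporalBlock(nn.Module):
    """Dilated 1-D convolution with a residual connection, the basic TCN unit."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):          # x: (batch, channels, T)
        return self.relu(self.conv(x) + x)


class CNNTCN(nn.Module):
    """CNN features per frame, then dilated temporal convolutions over the clip."""
    def __init__(self, num_classes=26, feat_dim=128):
        super().__init__()
        self.cnn = FrameCNN(feat_dim)
        self.tcn = nn.Sequential(TemporalBlock(feat_dim, 1),
                                 TemporalBlock(feat_dim, 2),
                                 TemporalBlock(feat_dim, 4))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, clips):      # clips: (batch, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)   # (b, T, feat_dim)
        out = self.tcn(feats.transpose(1, 2))                   # (b, feat_dim, T)
        return self.head(out.mean(dim=2))                       # class logits


if __name__ == "__main__":
    model = CNNTCN(num_classes=26)
    dummy = torch.randn(2, 16, 3, 64, 64)   # 2 clips of 16 RGB frames, 64x64
    print(model(dummy).shape)                # torch.Size([2, 26])
```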
