Abstract

Sign language plays a pivotal role as a primary means of communication for individuals with hearing and speech impairments. Because their auditory and vocal communication is limited, these individuals rely heavily on visual cues, such as body language and hand gestures, to convey their emotions and thoughts in everyday social interactions. At the isolated-sign level, sign languages include signs for individual letters and numerals. This study introduces a hybrid methodology for automated sign language identification that combines a Custom Convolutional Neural Network (CCNN) for feature extraction with a Temporal Convolutional Network (TCN) for sequence modeling. The system was evaluated on three distinct benchmark datasets of isolated letters and digits; these are comprehensive, publicly accessible resources covering both British Sign Language and American Sign Language. The proposed CNN-TCN model comprises several phases: data collection; preprocessing, involving labeling, normalization, and frame extraction; feature extraction using the CCNN; and sequence modeling through the TCN. The experimental results demonstrate strong performance, with accuracy, precision, recall, and F1 scores of 95.31%, 94.03%, 93.33%, and 93.56%, respectively, across the three datasets. These outcomes support the viability and effectiveness of the CNN-TCN method for sign language recognition.
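The abstract describes the pipeline only at a high level. The PyTorch sketch below illustrates one plausible realization of the stated architecture: a per-frame CNN feature extractor followed by a TCN of dilated causal 1-D convolutions over the frame sequence. All layer sizes, kernel widths, and dilation factors here are illustrative assumptions; the paper's actual CCNN and TCN configurations are not given in the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CCNN(nn.Module):
    # Per-frame feature extractor; channel counts are illustrative assumptions.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):                      # x: (B, 3, H, W)
        return self.fc(self.conv(x).flatten(1))

class TemporalBlock(nn.Module):
    # Dilated causal 1-D convolution with a residual connection (TCN core).
    def __init__(self, channels, dilation, kernel_size=3):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad keeps outputs causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                      # x: (B, C, T)
        out = self.conv(F.pad(x, (self.pad, 0)))
        return torch.relu(out + x)             # residual preserves length T

class CNNTCN(nn.Module):
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.cnn = CCNN(feat_dim)
        self.tcn = nn.Sequential(*[TemporalBlock(feat_dim, d) for d in (1, 2, 4)])
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # (B, T, F)
        out = self.tcn(feats.transpose(1, 2))                 # (B, F, T)
        return self.head(out[:, :, -1])        # classify from the last time step

For example, logits = CNNTCN(num_classes=26)(torch.randn(2, 16, 3, 64, 64)) classifies a batch of two 16-frame clips over a hypothetical 26-letter vocabulary. The exponentially increasing dilations (1, 2, 4) are the standard TCN device for widening the temporal receptive field without pooling.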
