Abstract

Sign Language (SL) is the primary means of communication for hearing-impaired people. Hearing people often have difficulty understanding SL, resulting in a communication barrier between the hearing-impaired and the hearing community. Sign Language Recognition Systems (SLRS) have helped to bridge this communication gap. Many SLRS have been proposed for recognizing SL; however, only a limited number of works have been reported for Indian Sign Language (ISL). Most existing SLRS focus on global features rather than on the Region of Interest (ROI). Focusing on the hand region and extracting local features from the ROI improves system accuracy. The attention mechanism is a widely used technique for emphasizing the ROI; however, only a few SLRS have used it. Those systems employed the Convolutional Block Attention Module and temporal attention, but Spatial Attention (SA) has not been utilized in previous SLRS. Therefore, a novel SA-based SLRS, named the Spatial Attention-based Sign Language Recognition Module (SASLRM), is proposed to recognize ISL words for emergency situations. SASLRM recognizes ISL words by combining convolution features from a pretrained VGG-19 model with attention features from an SA module. The proposed model accomplished an average accuracy of 95.627% on the ISL dataset. SASLRM was further validated on the LSA64, WLASL, and Cambridge Hand Gesture Recognition datasets, where it reached accuracies of 97.84%, 98.86%, and 98.22%, respectively. The results indicate the effectiveness of the proposed SLRS in comparison with existing SLRS.
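To illustrate the idea of spatial attention described above, the following is a minimal, dependency-free sketch of how an SA module can reweight a convolutional feature map so that locations in the ROI (e.g., the hand region) are emphasized. The exact SASLRM architecture is not specified in the abstract; this sketch assumes the common formulation in which per-location channel statistics (average and maximum) are squashed through a sigmoid to form an attention map, which then scales every channel. The function name `spatial_attention` and the omission of the learned convolution over the pooled statistics are simplifications for illustration.

```python
import math


def spatial_attention(features):
    """Apply a simplified spatial-attention weighting to a feature map.

    features: a list of C channel maps, each an H x W list of floats
    (e.g., conv features a pretrained VGG-19 backbone might emit).
    Returns a reweighted feature map of the same C x H x W shape.
    """
    C = len(features)
    H, W = len(features[0]), len(features[0][0])

    # Attention map: per-location channel average and maximum, squashed
    # by a sigmoid into (0, 1). (A full module would pass the pooled
    # statistics through a learned convolution; omitted here.)
    attn = [
        [
            1.0
            / (
                1.0
                + math.exp(
                    -(
                        sum(features[c][i][j] for c in range(C)) / C
                        + max(features[c][i][j] for c in range(C))
                    )
                )
            )
            for j in range(W)
        ]
        for i in range(H)
    ]

    # Emphasize the ROI: scale every channel by the spatial attention map,
    # so high-response locations dominate the downstream classifier.
    return [
        [[features[c][i][j] * attn[i][j] for j in range(W)] for i in range(H)]
        for c in range(C)
    ]
```

In a full system, the reweighted features would be concatenated or fused with the backbone's convolution features before classification, as the abstract describes for SASLRM.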
