Abstract

Bangla, the fifth most spoken language in the world, has its own distinct sign language with two methods of representation (one-handed and two-handed). However, a standard automatic recognition system for Bangla sign language (BdSL) has yet to be achieved. Although widely studied in recent years, unaddressed issues such as recognizing unseen signs, handling both types of BdSL representation, and the lack of model evaluation under diverse environmental conditions hinder the real-world deployment of automatic BdSL recognition. To address these shortcomings in existing work, this paper proposes two approaches, one based on conventional transfer learning and one on contemporary zero-shot learning (ZSL), for automatic BdSL alphabet recognition on both seen and unseen data. The performance of the proposed system is evaluated on both types of Bangla sign representation as well as on a large dataset of 35,149 images from over 350 subjects, varying in background, camera angle, light contrast, skin tone, hand size, and orientation. For the ZSL approach, a new semantic descriptor dedicated to BdSL is created and a split of the dataset into seen and unseen classes is proposed. With six unseen classes, our model achieved a harmonic mean accuracy of 68.21%, a seen accuracy of 91.57%, and a zero-shot accuracy of 54.34%. For the transfer-learning-based approach, after quantitative experimentation on 18 CNN architectures and 21 classifiers, we found the pre-trained DenseNet201 architecture to be the best-performing feature extractor and Linear Discriminant Analysis the best classifier, with an overall accuracy of 93.68% on the large dataset. These satisfactory results support the strong potential of our models to serve the hearing- and speech-impaired community.
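As a consistency check (the formula is not stated in the abstract, but the figures match the standard generalized ZSL convention), the reported harmonic mean accuracy $H$ combines the seen and zero-shot (unseen) accuracies as:

$$
H = \frac{2 \cdot \mathrm{acc}_{\mathrm{seen}} \cdot \mathrm{acc}_{\mathrm{unseen}}}{\mathrm{acc}_{\mathrm{seen}} + \mathrm{acc}_{\mathrm{unseen}}} = \frac{2 \times 91.57 \times 54.34}{91.57 + 54.34} \approx 68.21\%.
$$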
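For illustration, below is a minimal sketch of the transfer-learning pipeline described above: a frozen ImageNet-pretrained DenseNet201 used as a feature extractor feeding a Linear Discriminant Analysis classifier. It is not the authors' exact code; the dummy arrays stand in for the actual BdSL alphabet images and labels, and the 224x224 input size is an assumption.

```python
# Sketch of the transfer-learning pipeline: frozen DenseNet201
# features + Linear Discriminant Analysis (not the authors' code).
import numpy as np
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# ImageNet-pretrained backbone; global average pooling yields one
# 1920-dimensional feature vector per image.
backbone = DenseNet201(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images: np.ndarray) -> np.ndarray:
    """Map (N, 224, 224, 3) RGB images to (N, 1920) feature vectors."""
    return backbone.predict(preprocess_input(images.astype("float32")))

# Dummy stand-ins for the BdSL alphabet images and labels.
X_train = np.random.rand(8, 224, 224, 3)
y_train = np.array([0, 0, 1, 1, 2, 2, 3, 3])
X_test, y_test = np.random.rand(4, 224, 224, 3), np.array([0, 1, 2, 3])

clf = LinearDiscriminantAnalysis()  # best classifier per the abstract
clf.fit(extract_features(X_train), y_train)
print("test accuracy:", clf.score(extract_features(X_test), y_test))
```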
