Abstract

Sign language is an important means of communication within the deaf community, used primarily by people with hearing or speech impairments. Moreover, sign language offers a direct form of Human-Computer Interaction (HCI), analogous to voice commands. The purpose of this study is therefore to investigate and develop a system for American Sign Language (ASL) alphabet recognition using convolutional neural networks. Our proposal is based on semantic similarity learning with a Siamese Convolutional Neural Network, which reduces intra-class variation and inter-class similarity among sign images in a Euclidean embedding space. The Siamese architecture applied to the ASL alphabet dataset outperforms previous works reported in the literature. Using t-SNE visualization of the learned embeddings, we confirm our hypothesis: ASL recognition improves when the similarity among encodings of images belonging to the same class is increased and the similarity across classes is reduced.
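
The abstract gives no implementation details, so the following is only a minimal sketch of the idea in PyTorch: a single shared CNN encoder applied to a pair of sign images, trained with a contrastive loss that pulls same-class embeddings together and pushes different-class embeddings at least a margin apart in Euclidean space. The backbone layers, embedding dimension, margin value, and the choice of contrastive loss (rather than, say, a triplet loss) are all assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseASLNet(nn.Module):
    """Siamese network: one shared CNN encoder applied to both inputs."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        # Hypothetical small backbone; the paper's actual architecture
        # is not specified in the abstract.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(embedding_dim),  # infers input size on first call
        )

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        # Weight sharing: both branches go through the same encoder.
        return self.encoder(x1), self.encoder(x2)

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    """Pull embeddings of the same sign together; push embeddings of
    different signs at least `margin` apart in Euclidean distance.
    `same_class` is a float tensor of 1s (same label) and 0s (different)."""
    d = F.pairwise_distance(z1, z2)
    pos = same_class * d.pow(2)
    neg = (1 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()
```

Under this setup, running t-SNE on the encoder outputs would visualize how same-class embeddings cluster while different classes separate, which is the effect the abstract reports.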
