Abstract
In the field of multimodal communication, sign language remains one of the most understudied areas. Recent advances in deep learning open far-reaching possibilities for applying neural networks to sign language learning. This paper describes a method for American Sign Language (ASL) alphabet recognition using Convolutional Neural Networks (CNNs) that makes it possible to monitor a user’s learning progress. ASL alphabet recognition by computer vision is a challenging task due to the complexity of ASL signs, high inter-class similarity, large intra-class variation, and frequent occlusions. We produced a robust model that classifies letters correctly in the majority of cases. The experimental results encouraged us to investigate the adoption of AI techniques to support the learning of a sign language as a natural language with its own syntax and lexicon. The challenge was to deliver a mobile sign-language training solution that users can adopt in their everyday lives. To provide the additional computational resources required by locally connected end-user devices, we propose the adoption of a Fog-Computing architecture.
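To make the recognition pipeline concrete, the following is a minimal NumPy sketch of the forward pass of a CNN-style letter classifier (convolution, ReLU, max-pooling, a dense layer, and softmax over 26 letter classes). All weights here are random and untrained, the 28x28 input is a stand-in for a grayscale hand-sign crop, and all function names, sizes, and the number of filters are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 2-D cross-correlation of a single-channel image with one kernel.
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def relu(x):
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    # Non-overlapping max-pooling; trims edges that do not fit a full window.
    H2, W2 = x.shape[0] // size, x.shape[1] // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def classify(image, kernels, weights, bias):
    # Forward pass: conv -> ReLU -> pool per filter, flatten, dense, softmax.
    features = np.concatenate(
        [max_pool(relu(conv2d(image, k))).ravel() for k in kernels]
    )
    return softmax(weights @ features + bias)

rng = np.random.default_rng(0)
image = rng.random((28, 28))            # stand-in for a hand-sign image crop
kernels = rng.standard_normal((4, 3, 3))  # 4 illustrative 3x3 filters
feat_dim = 4 * 13 * 13                  # each 26x26 map pools to 13x13
weights = rng.standard_normal((26, feat_dim)) * 0.01
bias = np.zeros(26)

probs = classify(image, kernels, weights, bias)
letter = chr(ord('A') + int(np.argmax(probs)))  # predicted ASL letter
```

In practice such a model would be trained with a deep-learning framework on a labeled ASL alphabet dataset; this sketch only shows how an image is mapped to a probability distribution over the 26 letters.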
International Journal of Humanized Computing and Communication