Abstract

Millions of people with speech and hearing impairments communicate with sign languages every day. For hearing-impaired people, gesture recognition is as natural a way of communicating as voice recognition is for most people. In this study, we address the problem of translating sign language to text and propose an improved solution based on machine-learning techniques. Our goal is a system that hearing-impaired people can use in their everyday lives to promote communication and collaboration between hearing-impaired people and people who are not trained in American Sign Language (ASL). To develop a deep learning model for the ASL dataset, we use Transfer Learning in combination with Data Augmentation.

Keywords: Sign language, machine learning, transfer learning, ASL, Inception v3
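As a minimal sketch of the kind of Data Augmentation the abstract refers to, the snippet below applies random flips, crops, and brightness jitter to an image array with NumPy. The function name and the specific transforms are illustrative assumptions, not the paper's actual pipeline; in practice a framework such as TensorFlow/Keras (which also provides the pretrained Inception v3 backbone for transfer learning) would supply richer, GPU-friendly equivalents.

```python
import numpy as np

def augment(image, rng):
    """Apply simple random augmentations to an HxWxC uint8 image array.

    Illustrative only: real augmentation pipelines offer many more
    transforms; these cover flipping, cropping, and brightness jitter.
    """
    # Random horizontal flip. Caveat for sign-language data: flipping
    # mirrors the hand, which can change a sign's meaning, so whether
    # this transform is safe is dataset-dependent.
    if rng.random() < 0.5:
        image = image[:, ::-1, :]

    # Random crop to 90% of each spatial dimension.
    h, w, _ = image.shape
    ch, cw = int(h * 0.9), int(w * 0.9)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    image = image[top:top + ch, left:left + cw, :]

    # Random brightness jitter in [-20%, +20%], clipped to valid range.
    factor = 1.0 + rng.uniform(-0.2, 0.2)
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = np.full((100, 100, 3), 128, dtype=np.uint8)
out = augment(img, rng)
print(out.shape)  # cropped to (90, 90, 3)
```

Augmentations like these expand the effective training set, which is what makes fine-tuning a large pretrained network such as Inception v3 feasible on a comparatively small ASL dataset.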
