Abstract
Deafness and hearing loss affect millions of people worldwide; deaf and hard of hearing (DHH) individuals often cannot communicate through speech and rely on sign language instead. This communication gap often leads to limited access to education and job opportunities. While artificial intelligence (AI)-driven technologies have been investigated to address this issue, no study has focused on the intelligent, automatic translation of American Sign Language gestures into text in low-resource languages (LRLs), particularly in Nigeria. To address this gap, we propose a novel end-to-end framework for translating American Sign Language (ASL) into Nigerian LRLs such as Hausa, Ibo, and Yoruba. Our framework employs a Transformer-based model for ASL-to-Text generation and the No Language Left Behind (NLLB) translation model for translating the generated English text into the LRLs. We evaluated the ASL-to-Text model using BLEU scores, obtaining BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 18.55, 8.99, 4.99, and 2.96, respectively. Qualitative analysis of the framework shows that participants were able to comprehend the translated text and were satisfied with both the ASL-to-Text and Text-to-LRL models. Our proposed framework demonstrates the potential of AI-driven technologies to foster inclusivity in sociocultural interactions and education, especially for DHH people in low-resource settings.
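The sketch below is a minimal, illustrative rendering of the text side of the pipeline described in the abstract: English text (as would be produced by the ASL-to-Text Transformer) is translated into a Nigerian LRL with a public NLLB-200 checkpoint, and cumulative BLEU-1 through BLEU-4 are computed on the English output. The checkpoint name, FLORES-200 language codes, and example sentences are assumptions for demonstration; the authors' ASL-to-Text model is not public here, so its output is stubbed.

```python
# Illustrative sketch, not the authors' implementation.
# Assumes Hugging Face `transformers` and NLTK are installed.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from nltk.translate.bleu_score import sentence_bleu

NLLB = "facebook/nllb-200-distilled-600M"  # assumed public NLLB checkpoint
tokenizer = AutoTokenizer.from_pretrained(NLLB, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(NLLB)

def translate(english_text: str, tgt_lang: str) -> str:
    """Translate ASL-derived English text into a low-resource language.
    tgt_lang uses FLORES-200 codes, e.g. 'hau_Latn', 'ibo_Latn', 'yor_Latn'."""
    inputs = tokenizer(english_text, return_tensors="pt")
    out = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        max_length=64,
    )
    return tokenizer.batch_decode(out, skip_special_tokens=True)[0]

# Stand-in for the ASL-to-Text Transformer's output on a sign sequence.
asl_to_text_output = "the weather is nice today"
print(translate(asl_to_text_output, "hau_Latn"))  # Hausa
print(translate(asl_to_text_output, "yor_Latn"))  # Yoruba

# Cumulative BLEU-1..BLEU-4, as used to score the ASL-to-Text stage
# (reference and hypothesis here are made-up examples).
reference = ["the weather is nice today".split()]
hypothesis = "weather is nice today".split()
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n)) + (0.0,) * (4 - n)
    print(f"BLEU-{n}:", sentence_bleu(reference, hypothesis, weights=weights))
```

In practice the two stages are chained: the sign-video encoder-decoder produces English tokens, and those tokens are passed unchanged to the NLLB stage, so improvements in either stage propagate directly to the final LRL output.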