Abstract

Sign Language Recognition (SLR) plays a significant role in the disabled community, serving as a learning and communication aid in everyday activities such as interaction, education, and training. Arabic, Persian, and Kurdish all share the same writing system, the Arabic script. This article employs convolutional neural networks (CNNs) and transfer learning (MobileNet) to classify sign languages written in the Arabic alphabet. The study's primary goal is to develop a common standard for alphabetic sign language across Arabic, Persian, and Kurdish. The model was trained extensively on the ASSL2022 dataset using different activation functions. The collection contains a total of 81,857 images, gathered from two sources and covering the 40 letters of the Arabic script. The results show that the proposed models perform well, with an average training accuracy of 99.7% for the CNN and 99.32% for transfer learning. Compared with other research on languages written in the Arabic script, this work achieves better detection and identification accuracy.
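The transfer-learning setup the abstract describes (a MobileNet backbone with a new classification head for the 40 Arabic-script letters) can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code: the input size, pooling layer, optimizer, and frozen-backbone choice are assumptions, and `weights=None` stands in for the ImageNet-pretrained weights that transfer learning would normally load.

```python
# Hypothetical sketch of a MobileNet transfer-learning classifier for the
# 40 Arabic-script alphabet signs described in the abstract.
import tensorflow as tf

NUM_CLASSES = 40  # letters shared across Arabic, Persian, and Kurdish


def build_model(input_shape=(224, 224, 3), num_classes=NUM_CLASSES):
    # In a real transfer-learning run, weights="imagenet" would load the
    # pretrained backbone; weights=None keeps this sketch offline.
    base = tf.keras.applications.MobileNet(
        input_shape=input_shape, include_top=False, weights=None)
    base.trainable = False  # freeze the backbone; train only the new head
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_model()
```

Training would then call `model.fit` on the sign-image dataset; fine-tuning could later unfreeze the top backbone layers at a lower learning rate.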
