Abstract

In recent years, the role of pattern recognition in systems based on human-computer interaction (HCI) has spread across computer vision and machine learning applications. One of the most important of these applications is recognizing the hand gestures used to communicate with deaf people, in particular recognizing the dashed letters that begin surahs of the Quran. In this paper, we propose an Arabic Alphabet Sign Language Recognition System (AArSLRS) using a vision-based approach. The proposed system consists of four stages: data acquisition, preprocessing, feature extraction, and classification. The system deals with three types of datasets: bare hands against a dark background, bare hands against a light background, and hands wearing dark-colored gloves. AArSLRS begins by obtaining an image of an alphabet gesture, then detects the hand in the image and isolates it from the background using one of the proposed methods, after which hand features are extracted according to the selected extraction method. For classification, we used supervised learning techniques to classify the 28 letters of the Arabic alphabet using 9240 images, focusing on the 14 alphabetic letters that begin surahs of the Quran in the Quranic sign language (QSL). AArSLRS achieved an accuracy of 99.5% with the K-Nearest Neighbor (KNN) classifier.
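The four-stage pipeline summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: NumPy and scikit-learn are assumed tools, the intensity-threshold segmentation and geometric features are simplified stand-ins for the methods the paper proposes, and synthetic images replace the real gesture dataset.

```python
# A minimal sketch of the four-stage AArSLRS pipeline (acquisition,
# preprocessing, feature extraction, classification). Library choices and
# the specific segmentation/feature methods here are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def segment_hand(image, threshold=128):
    """Preprocessing: isolate a bright hand from a dark background by
    simple intensity thresholding (one possible segmentation method)."""
    return (image > threshold).astype(np.uint8)


def extract_features(mask):
    """Feature extraction: a small geometric feature vector from the
    binary hand mask (area fraction and normalized centroid)."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(3)
    h, w = mask.shape
    return np.array([mask.mean(), ys.mean() / h, xs.mean() / w])


rng = np.random.default_rng(0)


def make_image(label):
    """Acquisition stand-in: a synthetic 'gesture' image with a bright
    blob at a class-dependent position over dark background noise."""
    img = rng.integers(0, 60, size=(32, 32))
    r, c = 4 + 6 * (label % 4), 4
    img[r:r + 8, c:c + 8] = 200
    return img


labels = np.repeat(np.arange(4), 20)
features = np.array(
    [extract_features(segment_hand(make_image(lbl))) for lbl in labels]
)

# Classification: KNN, as used in the paper.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(features, labels)
print(knn.score(features, labels))
```

On these clean synthetic classes the classifier separates all four gestures; the real system's reported 99.5% accuracy of course depends on its actual segmentation and feature methods.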

Highlights

  • Sign language (SL) develops naturally, like spoken languages, within the deaf community, and each sign language has its own rules. Understanding of sign language outside the deaf community is almost non-existent, which makes communication between deaf people and hearing individuals very difficult.

  • We present the results of the proposed system for recognizing the Arabic sign language alphabet. The system designed and implemented, an alphabetic Arabic sign language recognition system (AArSLRS), translates and recognizes gestures made with one or both hands. The signers are not required to wear glove-based sensors or use any devices to interact with the system.

  • It is clear from the results table that AArSLRS achieves higher classification accuracy when using the cityblock distance measure, whereas ready-made software such as WEKA supports only the Euclidean distance. The ability to change the distance measure used in the proposed AArSLRS system is an important feature not available in ready-made software.
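The last highlight notes that the choice of distance measure matters for KNN. As a toy illustration (using scikit-learn, which is an assumption; the paper's AArSLRS implementation is its own), here is a query point for which Euclidean and cityblock (L1) distance select different nearest neighbors, and hence different classes:

```python
# Euclidean vs. cityblock distance in 1-NN classification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two training points whose nearest neighbor to the query differs by metric:
# the Euclidean distance from the origin to (1.3, 1.3) is ~1.84 < 2.0,
# but its cityblock distance is 2.6 > 2.0.
X = np.array([[2.0, 0.0], [1.3, 1.3]])
y = np.array([0, 1])
query = np.array([[0.0, 0.0]])

preds = {}
for metric in ("euclidean", "cityblock"):  # cityblock = L1 / Manhattan
    knn = KNeighborsClassifier(n_neighbors=1, metric=metric)
    knn.fit(X, y)
    preds[metric] = int(knn.predict(query)[0])

print(preds)  # → {'euclidean': 1, 'cityblock': 0}
```

Because the two metrics rank neighbors differently, a system that lets users swap the distance measure, as AArSLRS does, can tune this choice to the data rather than being fixed to Euclidean distance.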

Introduction

Sign language (SL) develops naturally, like spoken languages, within the deaf community, and each sign language has its own rules. Understanding of sign language outside the deaf community is almost non-existent, which makes communication between deaf people and hearing individuals very difficult. Some deaf children are born to hearing parents, so a language gap exists within the family. There is no standardized form of sign language, which makes teaching a deaf person a difficult challenge [1]. Deaf people also suffer from a limited understanding of religious teachings. In the case of Arabic Sign Language (ArSL), there is no standardized coordination of the language, which makes learning or translating it a difficult challenge for Arab deaf communities [2].
